00:00:00.001 Started by upstream project "autotest-per-patch" build number 127085
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.085 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.086 The recommended git tool is: git
00:00:00.086 using credential 00000000-0000-0000-0000-000000000002
00:00:00.093 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.152 Fetching changes from the remote Git repository
00:00:00.154 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.197 Using shallow fetch with depth 1
00:00:00.197 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.197 > git --version # timeout=10
00:00:00.221 > git --version # 'git version 2.39.2'
00:00:00.221 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.239 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.239 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.609 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.621 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.632 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:05.632 > git config core.sparsecheckout # timeout=10
00:00:05.646 > git read-tree -mu HEAD # timeout=10
00:00:05.664 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:05.700 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:05.700 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:05.803 [Pipeline] Start of Pipeline
00:00:05.818 [Pipeline] library
00:00:05.820 Loading library shm_lib@master
00:00:05.820 Library shm_lib@master is cached. Copying from home.
00:00:05.835 [Pipeline] node
00:00:05.848 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.849 [Pipeline] {
00:00:05.857 [Pipeline] catchError
00:00:05.859 [Pipeline] {
00:00:05.872 [Pipeline] wrap
00:00:05.882 [Pipeline] {
00:00:05.889 [Pipeline] stage
00:00:05.891 [Pipeline] { (Prologue)
00:00:06.072 [Pipeline] sh
00:00:06.356 + logger -p user.info -t JENKINS-CI
00:00:06.370 [Pipeline] echo
00:00:06.371 Node: GP8
00:00:06.377 [Pipeline] sh
00:00:06.676 [Pipeline] setCustomBuildProperty
00:00:06.686 [Pipeline] echo
00:00:06.688 Cleanup processes
00:00:06.693 [Pipeline] sh
00:00:06.977 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.977 1430845 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.990 [Pipeline] sh
00:00:07.272 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.272 ++ grep -v 'sudo pgrep'
00:00:07.272 ++ awk '{print $1}'
00:00:07.272 + sudo kill -9
00:00:07.272 + true
00:00:07.286 [Pipeline] cleanWs
00:00:07.296 [WS-CLEANUP] Deleting project workspace...
00:00:07.296 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.303 [WS-CLEANUP] done
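The "Cleanup processes" step above is a small pgrep/grep/awk/kill pipeline that removes SPDK processes left over from a previous run. A minimal standalone sketch of the same idiom, with the workspace path treated as a placeholder rather than the exact CI value:

#!/usr/bin/env bash
# Kill stale SPDK processes from an earlier build; harmless if none exist.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest   # placeholder path

# pgrep -af prints "PID full-command-line" for every match on the path;
# grep -v drops the pgrep invocation itself; awk keeps only the PIDs.
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')

# With an empty PID list, kill exits non-zero; "|| true" keeps a "set -e"
# script alive, which is why the log shows "+ sudo kill -9" then "+ true".
sudo kill -9 $pids || true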
00:00:07.306 [Pipeline] setCustomBuildProperty
00:00:07.321 [Pipeline] sh
00:00:07.624 + sudo git config --global --replace-all safe.directory '*'
00:00:07.694 [Pipeline] httpRequest
00:00:07.725 [Pipeline] echo
00:00:07.727 Sorcerer 10.211.164.101 is alive
00:00:07.736 [Pipeline] httpRequest
00:00:07.746 HttpMethod: GET
00:00:07.748 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:07.750 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:07.767 Response Code: HTTP/1.1 200 OK
00:00:07.770 Success: Status code 200 is in the accepted range: 200,404
00:00:07.772 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:09.366 [Pipeline] sh
00:00:09.650 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:09.665 [Pipeline] httpRequest
00:00:09.691 [Pipeline] echo
00:00:09.693 Sorcerer 10.211.164.101 is alive
00:00:09.701 [Pipeline] httpRequest
00:00:09.706 HttpMethod: GET
00:00:09.706 URL: http://10.211.164.101/packages/spdk_74f92fe69a974e537bd1cc41e35f022d1c0b6518.tar.gz
00:00:09.707 Sending request to url: http://10.211.164.101/packages/spdk_74f92fe69a974e537bd1cc41e35f022d1c0b6518.tar.gz
00:00:09.728 Response Code: HTTP/1.1 200 OK
00:00:09.728 Success: Status code 200 is in the accepted range: 200,404
00:00:09.729 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_74f92fe69a974e537bd1cc41e35f022d1c0b6518.tar.gz
00:00:40.130 [Pipeline] sh
00:00:40.414 + tar --no-same-owner -xf spdk_74f92fe69a974e537bd1cc41e35f022d1c0b6518.tar.gz
00:00:44.623 [Pipeline] sh
00:00:44.913 + git -C spdk log --oneline -n5
00:00:44.913 74f92fe69 raid: complete bdev_raid_create after sb is written
00:00:44.913 d005e023b raid: fix empty slot not updated in sb after resize
00:00:44.913 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:00:44.913 8ee2672c4 test/bdev: Add test for resized RAID with superblock
00:00:44.913 19f5787c8 raid: skip configured base bdevs in sb examine
00:00:44.926 [Pipeline] }
00:00:44.944 [Pipeline] // stage
00:00:44.955 [Pipeline] stage
00:00:44.957 [Pipeline] { (Prepare)
00:00:44.976 [Pipeline] writeFile
00:00:44.993 [Pipeline] sh
00:00:45.275 + logger -p user.info -t JENKINS-CI
00:00:45.289 [Pipeline] sh
00:00:45.572 + logger -p user.info -t JENKINS-CI
00:00:45.586 [Pipeline] sh
00:00:45.867 + cat autorun-spdk.conf
00:00:45.868 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:45.868 SPDK_TEST_NVMF=1
00:00:45.868 SPDK_TEST_NVME_CLI=1
00:00:45.868 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:45.868 SPDK_TEST_NVMF_NICS=e810
00:00:45.868 SPDK_TEST_VFIOUSER=1
00:00:45.868 SPDK_RUN_UBSAN=1
00:00:45.868 NET_TYPE=phy
00:00:45.875 RUN_NIGHTLY=0
00:00:45.880 [Pipeline] readFile
00:00:45.907 [Pipeline] withEnv
00:00:45.910 [Pipeline] {
00:00:45.924 [Pipeline] sh
00:00:46.209 + set -ex
00:00:46.209 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:46.209 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:46.209 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:46.209 ++ SPDK_TEST_NVMF=1
00:00:46.209 ++ SPDK_TEST_NVME_CLI=1
00:00:46.209 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:46.209 ++ SPDK_TEST_NVMF_NICS=e810
00:00:46.209 ++ SPDK_TEST_VFIOUSER=1
00:00:46.209 ++ SPDK_RUN_UBSAN=1
00:00:46.209 ++ NET_TYPE=phy
00:00:46.209 ++ RUN_NIGHTLY=0
00:00:46.209 + case $SPDK_TEST_NVMF_NICS in
00:00:46.209 + DRIVERS=ice
00:00:46.209 + [[ tcp == \r\d\m\a ]]
00:00:46.209 + [[ -n ice ]]
00:00:46.209 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:46.209 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:46.209 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:46.209 rmmod: ERROR: Module irdma is not currently loaded
00:00:46.209 rmmod: ERROR: Module i40iw is not currently loaded
00:00:46.209 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:46.209 + true
00:00:46.209 + for D in $DRIVERS
00:00:46.209 + sudo modprobe ice
00:00:46.209 + exit 0
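The withEnv block just traced shows how the job consumes autorun-spdk.conf: source it, map SPDK_TEST_NVMF_NICS to a kernel driver, sweep away competing RDMA modules, and load the one it needs. A condensed sketch of that logic; the real script's case table may cover more NIC types than the single e810 branch visible in this log:

set -ex
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf

case $SPDK_TEST_NVMF_NICS in
  e810) DRIVERS=ice ;;   # Intel E810 NICs use the ice driver, as traced above
esac

if [[ -n $DRIVERS ]]; then
  # rmmod exits non-zero for modules that are not loaded; "|| true" tolerates
  # that, which is why the log shows five rmmod errors followed by "+ true".
  sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
  for D in $DRIVERS; do
    sudo modprobe "$D"
  done
fi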
00:00:46.218 [Pipeline] }
00:00:46.235 [Pipeline] // withEnv
00:00:46.240 [Pipeline] }
00:00:46.253 [Pipeline] // stage
00:00:46.264 [Pipeline] catchError
00:00:46.266 [Pipeline] {
00:00:46.279 [Pipeline] timeout
00:00:46.279 Timeout set to expire in 50 min
00:00:46.280 [Pipeline] {
00:00:46.296 [Pipeline] stage
00:00:46.297 [Pipeline] { (Tests)
00:00:46.312 [Pipeline] sh
00:00:46.596 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:46.596 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:46.596 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:46.596 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:46.596 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:46.596 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:46.596 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:46.596 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:46.596 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:46.596 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:46.596 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:46.596 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:46.596 + source /etc/os-release
00:00:46.596 ++ NAME='Fedora Linux'
00:00:46.596 ++ VERSION='38 (Cloud Edition)'
00:00:46.596 ++ ID=fedora
00:00:46.596 ++ VERSION_ID=38
00:00:46.596 ++ VERSION_CODENAME=
00:00:46.596 ++ PLATFORM_ID=platform:f38
00:00:46.596 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:46.596 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:46.596 ++ LOGO=fedora-logo-icon
00:00:46.596 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:46.596 ++ HOME_URL=https://fedoraproject.org/
00:00:46.596 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:46.596 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:46.596 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:46.596 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:46.596 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:46.596 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:46.596 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:46.596 ++ SUPPORT_END=2024-05-14
00:00:46.596 ++ VARIANT='Cloud Edition'
00:00:46.596 ++ VARIANT_ID=cloud
00:00:46.596 + uname -a
00:00:46.596 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:46.596 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:47.973 Hugepages
00:00:47.973 node hugesize free / total
00:00:47.973 node0 1048576kB 0 / 0
00:00:47.973 node0 2048kB 0 / 0
00:00:47.973 node1 1048576kB 0 / 0
00:00:47.973 node1 2048kB 0 / 0
00:00:47.973
00:00:47.973 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:47.973 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:47.973 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:47.973 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:47.973 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:47.973 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:47.973 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:47.973 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:47.973 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:47.973 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:47.973 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:47.973 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:47.973 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:47.973 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:47.973 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:47.973 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:47.973 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:47.973 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1
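setup.sh status first prints the per-NUMA-node hugepage counters, then the PCI devices SPDK cares about. The hugepage half can be reproduced straight from sysfs; the paths below are the standard kernel locations, not anything SPDK-specific:

# Per-node hugepage summary, equivalent in spirit to the table above.
for node in /sys/devices/system/node/node*; do
  for hp in "$node"/hugepages/hugepages-*; do
    size=${hp##*hugepages-}                  # e.g. 2048kB or 1048576kB
    printf '%s %s %s / %s\n' "${node##*/}" "$size" \
      "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
  done
done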
00:00:47.973 + rm -f /tmp/spdk-ld-path
00:00:47.973 + source autorun-spdk.conf
00:00:47.973 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:47.973 ++ SPDK_TEST_NVMF=1
00:00:47.973 ++ SPDK_TEST_NVME_CLI=1
00:00:47.973 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:47.973 ++ SPDK_TEST_NVMF_NICS=e810
00:00:47.973 ++ SPDK_TEST_VFIOUSER=1
00:00:47.973 ++ SPDK_RUN_UBSAN=1
00:00:47.973 ++ NET_TYPE=phy
00:00:47.973 ++ RUN_NIGHTLY=0
00:00:47.973 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:47.973 + [[ -n '' ]]
00:00:47.973 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:47.973 + for M in /var/spdk/build-*-manifest.txt
00:00:47.973 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:47.973 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:47.973 + for M in /var/spdk/build-*-manifest.txt
00:00:47.973 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:47.973 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:47.973 ++ uname
00:00:47.973 + [[ Linux == \L\i\n\u\x ]]
00:00:47.973 + sudo dmesg -T
00:00:47.973 + sudo dmesg --clear
00:00:47.973 + dmesg_pid=1431525
00:00:47.973 + [[ Fedora Linux == FreeBSD ]]
00:00:47.973 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:47.973 + sudo dmesg -Tw
00:00:47.973 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:47.973 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:47.974 + [[ -x /usr/src/fio-static/fio ]]
00:00:47.974 + export FIO_BIN=/usr/src/fio-static/fio
00:00:47.974 + FIO_BIN=/usr/src/fio-static/fio
00:00:47.974 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:47.974 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:47.974 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:47.974 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:47.974 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:47.974 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:47.974 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:47.974 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:47.974 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:47.974 Test configuration:
00:00:47.974 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:47.974 SPDK_TEST_NVMF=1
00:00:47.974 SPDK_TEST_NVME_CLI=1
00:00:47.974 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:47.974 SPDK_TEST_NVMF_NICS=e810
00:00:47.974 SPDK_TEST_VFIOUSER=1
00:00:47.974 SPDK_RUN_UBSAN=1
00:00:47.974 NET_TYPE=phy
00:00:48.233 RUN_NIGHTLY=0
18:52:53 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
18:52:53 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
18:52:53 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
18:52:53 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
18:52:53 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:52:53 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:52:53 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:52:53 -- paths/export.sh@5 -- $ export PATH
18:52:53 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:52:53 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
18:52:53 -- common/autobuild_common.sh@447 -- $ date +%s
18:52:53 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721839973.XXXXXX
18:52:53 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721839973.Jm1YVy
18:52:53 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
18:52:53 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
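The @446-@447 trace above shows autobuild creating its per-run scratch area: an output directory beside the repo, and a temp workspace whose name embeds the epoch timestamp. The same two steps outside the trace would look like this; 1721839973 is this run's own stamp and is regenerated on every invocation:

out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output    # collected artifacts
SPDK_WORKSPACE=$(mktemp -dt "spdk_$(date +%s).XXXXXX")             # e.g. /tmp/spdk_1721839973.Jm1YVy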
18:52:53 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
18:52:53 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
18:52:53 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
18:52:53 -- common/autobuild_common.sh@463 -- $ get_config_params
18:52:53 -- common/autotest_common.sh@398 -- $ xtrace_disable
18:52:53 -- common/autotest_common.sh@10 -- $ set +x
18:52:53 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
18:52:53 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
18:52:53 -- pm/common@17 -- $ local monitor
18:52:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:52:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:52:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:52:53 -- pm/common@21 -- $ date +%s
18:52:53 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:52:53 -- pm/common@25 -- $ sleep 1
18:52:53 -- pm/common@21 -- $ date +%s
18:52:53 -- pm/common@21 -- $ date +%s
18:52:53 -- pm/common@21 -- $ date +%s
18:52:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721839973
18:52:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721839973
18:52:53 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721839973
18:52:53 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721839973
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721839973_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721839973_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721839973_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721839973_collect-bmc-pm.bmc.pm.log
18:52:54 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
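start_monitor_resources launches the four pm/ collectors and then arms an EXIT trap so they are torn down however the build ends. A plain-bash sketch of the same pattern; SPDK_ROOT is a placeholder, the flag meanings are inferred from the "Redirecting to ..." lines above, and a generic kill-the-jobs trap stands in for SPDK's own stop_monitor_resources helper:

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # placeholder
power_dir=$SPDK_ROOT/../output/power
prefix=monitor.autobuild.sh.$(date +%s)

# Flags as traced above: -d output directory, -l log to a file there,
# -p per-run file prefix. collect-bmc-pm needs root, hence sudo -E.
"$SPDK_ROOT/scripts/perf/pm/collect-cpu-load" -d "$power_dir" -l -p "$prefix" &
"$SPDK_ROOT/scripts/perf/pm/collect-vmstat"   -d "$power_dir" -l -p "$prefix" &
"$SPDK_ROOT/scripts/perf/pm/collect-cpu-temp" -d "$power_dir" -l -p "$prefix" &
sudo -E "$SPDK_ROOT/scripts/perf/pm/collect-bmc-pm" -d "$power_dir" -l -p "$prefix" &

# Stop every collector when this shell exits, whatever the exit path.
trap 'kill $(jobs -p) 2>/dev/null' EXIT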
18:52:54 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
18:52:54 -- spdk/autobuild.sh@12 -- $ umask 022
18:52:54 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
18:52:54 -- spdk/autobuild.sh@16 -- $ date -u
00:00:49.168 Wed Jul 24 04:52:54 PM UTC 2024
18:52:54 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:49.168 v24.09-pre-319-g74f92fe69
18:52:54 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
18:52:54 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
18:52:54 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
18:52:54 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
18:52:54 -- common/autotest_common.sh@1107 -- $ xtrace_disable
18:52:54 -- common/autotest_common.sh@10 -- $ set +x
00:00:49.168 ************************************
00:00:49.168 START TEST ubsan
00:00:49.168 ************************************
18:52:54 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:00:49.168 using ubsan
00:00:49.168
00:00:49.168 real 0m0.000s
00:00:49.168 user 0m0.000s
00:00:49.168 sys 0m0.000s
18:52:54 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
18:52:54 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:49.168 ************************************
00:00:49.168 END TEST ubsan
00:00:49.168 ************************************
18:52:54 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
18:52:54 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
18:52:54 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
18:52:54 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
18:52:54 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
18:52:54 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
18:52:54 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
18:52:54 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
18:52:54 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:49.427 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:49.427 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:49.996 Using 'verbs' RDMA provider
00:01:05.816 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:20.728 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:20.728 Creating mk/config.mk...done.
00:01:20.728 Creating mk/cc.flags.mk...done.
00:01:20.728 Type 'make' to build.
18:53:26 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
18:53:26 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
18:53:26 -- common/autotest_common.sh@1107 -- $ xtrace_disable
18:53:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:20.728 ************************************
00:01:20.728 START TEST make
00:01:20.728 ************************************
18:53:26 make -- common/autotest_common.sh@1125 -- $ make -j48
00:01:21.301 make[1]: Nothing to be done for 'all'.
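The build phase above boils down to two commands: a configure with the job's feature flags, then the make that run_test wraps and times. To replay it outside CI, something like the following, with the flag list copied verbatim from the trace and the path adjusted to a local checkout:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # or any SPDK checkout
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
make -j48   # "run_test make make -j48" in the log is a timed wrapper around this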
00:01:23.229 The Meson build system
00:01:23.230 Version: 1.3.1
00:01:23.230 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:23.230 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:23.230 Build type: native build
00:01:23.230 Project name: libvfio-user
00:01:23.230 Project version: 0.0.1
00:01:23.230 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:23.230 C linker for the host machine: cc ld.bfd 2.39-16
00:01:23.230 Host machine cpu family: x86_64
00:01:23.230 Host machine cpu: x86_64
00:01:23.230 Run-time dependency threads found: YES
00:01:23.230 Library dl found: YES
00:01:23.230 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:23.230 Run-time dependency json-c found: YES 0.17
00:01:23.230 Run-time dependency cmocka found: YES 1.1.7
00:01:23.230 Program pytest-3 found: NO
00:01:23.230 Program flake8 found: NO
00:01:23.230 Program misspell-fixer found: NO
00:01:23.230 Program restructuredtext-lint found: NO
00:01:23.230 Program valgrind found: YES (/usr/bin/valgrind)
00:01:23.230 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:23.230 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:23.230 Compiler for C supports arguments -Wwrite-strings: YES
00:01:23.230 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:23.230 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:23.230 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:23.230 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:23.230 Build targets in project: 8
00:01:23.230 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:23.230 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:23.230
00:01:23.230 libvfio-user 0.0.1
00:01:23.230
00:01:23.230 User defined options
00:01:23.230 buildtype : debug
00:01:23.230 default_library: shared
00:01:23.230 libdir : /usr/local/lib
00:01:23.230
00:01:23.230 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:23.811 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:24.075 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:24.075 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:24.075 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:24.075 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:24.075 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:24.075 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:24.075 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:24.075 [8/37] Compiling C object samples/null.p/null.c.o
00:01:24.075 [9/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:24.075 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:24.335 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:24.335 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:24.335 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:24.335 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:24.335 [15/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:24.335 [16/37] Compiling C object samples/server.p/server.c.o
00:01:24.335 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:24.335 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:24.335 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:24.335 [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:24.335 [21/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:24.335 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:24.335 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:24.335 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:24.335 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:24.335 [26/37] Compiling C object samples/client.p/client.c.o
00:01:24.335 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:24.335 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:24.597 [29/37] Linking target samples/client
00:01:24.597 [30/37] Linking target test/unit_tests
00:01:24.597 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:24.863 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:24.863 [33/37] Linking target samples/gpio-pci-idio-16
00:01:24.863 [34/37] Linking target samples/server
00:01:24.863 [35/37] Linking target samples/lspci
00:01:24.863 [36/37] Linking target samples/shadow_ioeventfd_server
00:01:24.863 [37/37] Linking target samples/null
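The libvfio-user submodule is configured and compiled with Meson and Ninja and then staged into the SPDK tree with a DESTDIR install, as the surrounding lines show. A hand-run equivalent, assuming the same source and build directories; the option values mirror the "User defined options" block above, though the exact command SPDK's build generates is not itself shown in the log:

SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug

meson setup "$BUILD" "$SRC" --buildtype=debug -Ddefault_library=shared
ninja -C "$BUILD"
DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
    meson install --quiet -C "$BUILD"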
00:01:24.863 INFO: autodetecting backend as ninja
00:01:24.863 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:24.863 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:25.806 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:25.806 ninja: no work to do.
00:01:32.372 The Meson build system
00:01:32.372 Version: 1.3.1
00:01:32.372 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:32.372 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:32.372 Build type: native build
00:01:32.372 Program cat found: YES (/usr/bin/cat)
00:01:32.373 Project name: DPDK
00:01:32.373 Project version: 24.03.0
00:01:32.373 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:32.373 C linker for the host machine: cc ld.bfd 2.39-16
00:01:32.373 Host machine cpu family: x86_64
00:01:32.373 Host machine cpu: x86_64
00:01:32.373 Message: ## Building in Developer Mode ##
00:01:32.373 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:32.373 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:32.373 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:32.373 Program python3 found: YES (/usr/bin/python3)
00:01:32.373 Program cat found: YES (/usr/bin/cat)
00:01:32.373 Compiler for C supports arguments -march=native: YES
00:01:32.373 Checking for size of "void *" : 8
00:01:32.373 Checking for size of "void *" : 8 (cached)
00:01:32.373 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:32.373 Library m found: YES
00:01:32.373 Library numa found: YES
00:01:32.373 Has header "numaif.h" : YES
00:01:32.373 Library fdt found: NO
00:01:32.373 Library execinfo found: NO
00:01:32.373 Has header "execinfo.h" : YES
00:01:32.373 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:32.373 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:32.373 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:32.373 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:32.373 Run-time dependency openssl found: YES 3.0.9
00:01:32.373 Run-time dependency libpcap found: YES 1.10.4
00:01:32.373 Has header "pcap.h" with dependency libpcap: YES
00:01:32.373 Compiler for C supports arguments -Wcast-qual: YES
00:01:32.373 Compiler for C supports arguments -Wdeprecated: YES
00:01:32.373 Compiler for C supports arguments -Wformat: YES
00:01:32.373 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:32.373 Compiler for C supports arguments -Wformat-security: NO
00:01:32.373 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:32.373 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:32.373 Compiler for C supports arguments -Wnested-externs: YES
00:01:32.373 Compiler for C supports arguments -Wold-style-definition: YES
00:01:32.373 Compiler for C supports arguments -Wpointer-arith: YES
00:01:32.373 Compiler for C supports arguments -Wsign-compare: YES
00:01:32.373 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:32.373 Compiler for C supports arguments -Wundef: YES
00:01:32.373 Compiler for C supports arguments -Wwrite-strings: YES
00:01:32.373 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:32.373 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:32.373 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:32.373 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:32.373 Program objdump found: YES (/usr/bin/objdump)
00:01:32.373 Compiler for C supports arguments -mavx512f: YES
00:01:32.373 Checking if "AVX512 checking" compiles: YES
00:01:32.373 Fetching value of define "__SSE4_2__" : 1
00:01:32.373 Fetching value of define "__AES__" : 1
00:01:32.373 Fetching value of define "__AVX__" : 1
00:01:32.373 Fetching value of define "__AVX2__" : (undefined)
00:01:32.373 Fetching value of define "__AVX512BW__" : (undefined)
00:01:32.373 Fetching value of define "__AVX512CD__" : (undefined)
00:01:32.373 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:32.373 Fetching value of define "__AVX512F__" : (undefined)
00:01:32.373 Fetching value of define "__AVX512VL__" : (undefined)
00:01:32.373 Fetching value of define "__PCLMUL__" : 1
00:01:32.373 Fetching value of define "__RDRND__" : 1
00:01:32.373 Fetching value of define "__RDSEED__" : (undefined)
00:01:32.373 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:32.373 Fetching value of define "__znver1__" : (undefined)
00:01:32.373 Fetching value of define "__znver2__" : (undefined)
00:01:32.373 Fetching value of define "__znver3__" : (undefined)
00:01:32.373 Fetching value of define "__znver4__" : (undefined)
00:01:32.373 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:32.373 Message: lib/log: Defining dependency "log"
00:01:32.373 Message: lib/kvargs: Defining dependency "kvargs"
00:01:32.373 Message: lib/telemetry: Defining dependency "telemetry"
00:01:32.373 Checking for function "getentropy" : NO
00:01:32.373 Message: lib/eal: Defining dependency "eal"
00:01:32.373 Message: lib/ring: Defining dependency "ring"
00:01:32.373 Message: lib/rcu: Defining dependency "rcu"
00:01:32.373 Message: lib/mempool: Defining dependency "mempool"
00:01:32.373 Message: lib/mbuf: Defining dependency "mbuf"
00:01:32.373 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:32.373 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:32.373 Compiler for C supports arguments -mpclmul: YES
00:01:32.373 Compiler for C supports arguments -maes: YES
00:01:32.373 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:32.373 Compiler for C supports arguments -mavx512bw: YES
00:01:32.373 Compiler for C supports arguments -mavx512dq: YES
00:01:32.373 Compiler for C supports arguments -mavx512vl: YES
00:01:32.373 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:32.373 Compiler for C supports arguments -mavx2: YES
00:01:32.373 Compiler for C supports arguments -mavx: YES
00:01:32.373 Message: lib/net: Defining dependency "net"
00:01:32.373 Message: lib/meter: Defining dependency "meter"
00:01:32.373 Message: lib/ethdev: Defining dependency "ethdev"
00:01:32.373 Message: lib/pci: Defining dependency "pci"
00:01:32.373 Message: lib/cmdline: Defining dependency "cmdline"
00:01:32.373 Message: lib/hash: Defining dependency "hash"
00:01:32.373 Message: lib/timer: Defining dependency "timer"
00:01:32.373 Message: lib/compressdev: Defining dependency "compressdev"
00:01:32.373 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:32.373 Message: lib/dmadev: Defining dependency "dmadev"
00:01:32.373 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:32.373 Message: lib/power: Defining dependency "power"
00:01:32.373 Message: lib/reorder: Defining dependency "reorder"
00:01:32.373 Message: lib/security: Defining dependency "security"
00:01:32.373 Has header "linux/userfaultfd.h" : YES
00:01:32.373 Has header "linux/vduse.h" : YES
00:01:32.373 Message: lib/vhost: Defining dependency "vhost"
00:01:32.373 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:32.373 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:32.373 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:32.373 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:32.373 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:32.373 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:32.373 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:32.373 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:32.373 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:32.373 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:32.373 Program doxygen found: YES (/usr/bin/doxygen)
00:01:32.373 Configuring doxy-api-html.conf using configuration
00:01:32.373 Configuring doxy-api-man.conf using configuration
00:01:32.373 Program mandb found: YES (/usr/bin/mandb)
00:01:32.373 Program sphinx-build found: NO
00:01:32.373 Configuring rte_build_config.h using configuration
00:01:32.373 Message:
00:01:32.373 =================
00:01:32.373 Applications Enabled
00:01:32.373 =================
00:01:32.373
00:01:32.373 apps:
00:01:32.373
00:01:32.373
00:01:32.373 Message:
00:01:32.373 =================
00:01:32.373 Libraries Enabled
00:01:32.373 =================
00:01:32.373
00:01:32.373 libs:
00:01:32.373 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:32.373 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:32.373 cryptodev, dmadev, power, reorder, security, vhost,
00:01:32.373
00:01:32.373 Message:
00:01:32.373 ===============
00:01:32.373 Drivers Enabled
00:01:32.373 ===============
00:01:32.373
00:01:32.373 common:
00:01:32.373
00:01:32.373 bus:
00:01:32.373 pci, vdev,
00:01:32.373 mempool:
00:01:32.373 ring,
00:01:32.373 dma:
00:01:32.373
00:01:32.373 net:
00:01:32.373
00:01:32.373 crypto:
00:01:32.373
00:01:32.373 compress:
00:01:32.373
00:01:32.373 vdpa:
00:01:32.373
00:01:32.373
00:01:32.373 Message:
00:01:32.373 =================
00:01:32.373 Content Skipped
00:01:32.373 =================
00:01:32.373
00:01:32.373 apps:
00:01:32.373 dumpcap: explicitly disabled via build config
00:01:32.373 graph: explicitly disabled via build config
00:01:32.373 pdump: explicitly disabled via build config
00:01:32.373 proc-info: explicitly disabled via build config
00:01:32.373 test-acl: explicitly disabled via build config
00:01:32.373 test-bbdev: explicitly disabled via build config
00:01:32.373 test-cmdline: explicitly disabled via build config
00:01:32.373 test-compress-perf: explicitly disabled via build config
00:01:32.373 test-crypto-perf: explicitly disabled via build config
00:01:32.373 test-dma-perf: explicitly disabled via build config
00:01:32.373 test-eventdev: explicitly disabled via build config
00:01:32.373 test-fib: explicitly disabled via build config
00:01:32.373 test-flow-perf: explicitly disabled via build config
00:01:32.373 test-gpudev: explicitly disabled via build config
00:01:32.373 test-mldev: explicitly disabled via build config
00:01:32.373 test-pipeline: explicitly disabled via build config
00:01:32.373 test-pmd: explicitly disabled via build config
00:01:32.374 test-regex: explicitly disabled via build config
00:01:32.374 test-sad: explicitly disabled via build config
00:01:32.374 test-security-perf: explicitly disabled via build config
00:01:32.374
00:01:32.374 libs:
00:01:32.374 argparse: explicitly disabled via build config
00:01:32.374 metrics: explicitly disabled via build config
00:01:32.374 acl: explicitly disabled via build config
00:01:32.374 bbdev: explicitly disabled via build config
00:01:32.374 bitratestats: explicitly disabled via build config
00:01:32.374 bpf: explicitly disabled via build config
00:01:32.374 cfgfile: explicitly disabled via build config
00:01:32.374 distributor: explicitly disabled via build config
00:01:32.374 efd: explicitly disabled via build config
00:01:32.374 eventdev: explicitly disabled via build config
00:01:32.374 dispatcher: explicitly disabled via build config
00:01:32.374 gpudev: explicitly disabled via build config
00:01:32.374 gro: explicitly disabled via build config
00:01:32.374 gso: explicitly disabled via build config
00:01:32.374 ip_frag: explicitly disabled via build config
00:01:32.374 jobstats: explicitly disabled via build config
00:01:32.374 latencystats: explicitly disabled via build config
00:01:32.374 lpm: explicitly disabled via build config
00:01:32.374 member: explicitly disabled via build config
00:01:32.374 pcapng: explicitly disabled via build config
00:01:32.374 rawdev: explicitly disabled via build config
00:01:32.374 regexdev: explicitly disabled via build config
00:01:32.374 mldev: explicitly disabled via build config
00:01:32.374 rib: explicitly disabled via build config
00:01:32.374 sched: explicitly disabled via build config
00:01:32.374 stack: explicitly disabled via build config
00:01:32.374 ipsec: explicitly disabled via build config
00:01:32.374 pdcp: explicitly disabled via build config
00:01:32.374 fib: explicitly disabled via build config
00:01:32.374 port: explicitly disabled via build config
00:01:32.374 pdump: explicitly disabled via build config
00:01:32.374 table: explicitly disabled via build config
00:01:32.374 pipeline: explicitly disabled via build config
00:01:32.374 graph: explicitly disabled via build config
00:01:32.374 node: explicitly disabled via build config
00:01:32.374
00:01:32.374 drivers:
00:01:32.374 common/cpt: not in enabled drivers build config
00:01:32.374 common/dpaax: not in enabled drivers build config
00:01:32.374 common/iavf: not in enabled drivers build config
00:01:32.374 common/idpf: not in enabled drivers build config
00:01:32.374 common/ionic: not in enabled drivers build config
00:01:32.374 common/mvep: not in enabled drivers build config
00:01:32.374 common/octeontx: not in enabled drivers build config
00:01:32.374 bus/auxiliary: not in enabled drivers build config
00:01:32.374 bus/cdx: not in enabled drivers build config
00:01:32.374 bus/dpaa: not in enabled drivers build config
00:01:32.374 bus/fslmc: not in enabled drivers build config
00:01:32.374 bus/ifpga: not in enabled drivers build config
00:01:32.374 bus/platform: not in enabled drivers build config
00:01:32.374 bus/uacce: not in enabled drivers build config
00:01:32.374 bus/vmbus: not in enabled drivers build config
00:01:32.374 common/cnxk: not in enabled drivers build config
00:01:32.374 common/mlx5: not in enabled drivers build config
00:01:32.374 common/nfp: not in enabled drivers build config
00:01:32.374 common/nitrox: not in enabled drivers build config
00:01:32.374 common/qat: not in enabled drivers build config
00:01:32.374 common/sfc_efx: not in enabled drivers build config
00:01:32.374 mempool/bucket: not in enabled drivers build config
00:01:32.374 mempool/cnxk: not in enabled drivers build config
00:01:32.374 mempool/dpaa: not in enabled drivers build config
00:01:32.374 mempool/dpaa2: not in enabled drivers build config
00:01:32.374 mempool/octeontx: not in enabled drivers build config
00:01:32.374 mempool/stack: not in enabled drivers build config
00:01:32.374 dma/cnxk: not in enabled drivers build config
00:01:32.374 dma/dpaa: not in enabled drivers build config
00:01:32.374 dma/dpaa2: not in enabled drivers build config
00:01:32.374 dma/hisilicon: not in enabled drivers build config
00:01:32.374 dma/idxd: not in enabled drivers build config
00:01:32.374 dma/ioat: not in enabled drivers build config
00:01:32.374 dma/skeleton: not in enabled drivers build config
00:01:32.374 net/af_packet: not in enabled drivers build config
00:01:32.374 net/af_xdp: not in enabled drivers build config
00:01:32.374 net/ark: not in enabled drivers build config
00:01:32.374 net/atlantic: not in enabled drivers build config
00:01:32.374 net/avp: not in enabled drivers build config
00:01:32.374 net/axgbe: not in enabled drivers build config
00:01:32.374 net/bnx2x: not in enabled drivers build config
00:01:32.374 net/bnxt: not in enabled drivers build config
00:01:32.374 net/bonding: not in enabled drivers build config
00:01:32.374 net/cnxk: not in enabled drivers build config
00:01:32.374 net/cpfl: not in enabled drivers build config
00:01:32.374 net/cxgbe: not in enabled drivers build config
00:01:32.374 net/dpaa: not in enabled drivers build config
00:01:32.374 net/dpaa2: not in enabled drivers build config
00:01:32.374 net/e1000: not in enabled drivers build config
00:01:32.374 net/ena: not in enabled drivers build config
00:01:32.374 net/enetc: not in enabled drivers build config
00:01:32.374 net/enetfec: not in enabled drivers build config
00:01:32.374 net/enic: not in enabled drivers build config
00:01:32.374 net/failsafe: not in enabled drivers build config
00:01:32.374 net/fm10k: not in enabled drivers build config
00:01:32.374 net/gve: not in enabled drivers build config
00:01:32.374 net/hinic: not in enabled drivers build config
00:01:32.374 net/hns3: not in enabled drivers build config
00:01:32.374 net/i40e: not in enabled drivers build config
00:01:32.374 net/iavf: not in enabled drivers build config
00:01:32.374 net/ice: not in enabled drivers build config
00:01:32.374 net/idpf: not in enabled drivers build config
00:01:32.374 net/igc: not in enabled drivers build config
00:01:32.374 net/ionic: not in enabled drivers build config
00:01:32.374 net/ipn3ke: not in enabled drivers build config
00:01:32.374 net/ixgbe: not in enabled drivers build config
00:01:32.374 net/mana: not in enabled drivers build config
00:01:32.374 net/memif: not in enabled drivers build config
00:01:32.374 net/mlx4: not in enabled drivers build config
00:01:32.374 net/mlx5: not in enabled drivers build config
00:01:32.374 net/mvneta: not in enabled drivers build config
00:01:32.374 net/mvpp2: not in enabled drivers build config
00:01:32.374 net/netvsc: not in enabled drivers build config
00:01:32.374 net/nfb: not in enabled drivers build config
00:01:32.374 net/nfp: not in enabled drivers build config
00:01:32.374 net/ngbe: not in enabled drivers build config
00:01:32.374 net/null: not in enabled drivers build config
00:01:32.374 net/octeontx: not in enabled drivers build config
00:01:32.374 net/octeon_ep: not in enabled drivers build config
00:01:32.374 net/pcap: not in enabled drivers build config
00:01:32.374 net/pfe: not in enabled drivers build config
00:01:32.374 net/qede: not in enabled drivers build config
00:01:32.374 net/ring: not in enabled drivers build config
00:01:32.374 net/sfc: not in enabled drivers build config
00:01:32.374 net/softnic: not in enabled drivers build config
00:01:32.374 net/tap: not in enabled drivers build config
00:01:32.374 net/thunderx: not in enabled drivers build config
00:01:32.374 net/txgbe: not in enabled drivers build config
00:01:32.374 net/vdev_netvsc: not in enabled drivers build config
00:01:32.374 net/vhost: not in enabled drivers build config
00:01:32.374 net/virtio: not in enabled drivers build config
00:01:32.374 net/vmxnet3: not in enabled drivers build config
00:01:32.374 raw/*: missing internal dependency, "rawdev"
00:01:32.374 crypto/armv8: not in enabled drivers build config
00:01:32.374 crypto/bcmfs: not in enabled drivers build config
00:01:32.374 crypto/caam_jr: not in enabled drivers build config
00:01:32.374 crypto/ccp: not in enabled drivers build config
00:01:32.374 crypto/cnxk: not in enabled drivers build config
00:01:32.374 crypto/dpaa_sec: not in enabled drivers build config
00:01:32.374 crypto/dpaa2_sec: not in enabled drivers build config
00:01:32.374 crypto/ipsec_mb: not in enabled drivers build config
00:01:32.374 crypto/mlx5: not in enabled drivers build config
00:01:32.374 crypto/mvsam: not in enabled drivers build config
00:01:32.374 crypto/nitrox: not in enabled drivers build config
00:01:32.374 crypto/null: not in enabled drivers build config
00:01:32.375 crypto/octeontx: not in enabled drivers build config
00:01:32.375 crypto/openssl: not in enabled drivers build config
00:01:32.375 crypto/scheduler: not in enabled drivers build config
00:01:32.375 crypto/uadk: not in enabled drivers build config
00:01:32.375 crypto/virtio: not in enabled drivers build config
00:01:32.375 compress/isal: not in enabled drivers build config
00:01:32.375 compress/mlx5: not in enabled drivers build config
00:01:32.375 compress/nitrox: not in enabled drivers build config
00:01:32.375 compress/octeontx: not in enabled drivers build config
00:01:32.375 compress/zlib: not in enabled drivers build config
00:01:32.375 regex/*: missing internal dependency, "regexdev"
00:01:32.375 ml/*: missing internal dependency, "mldev"
00:01:32.375 vdpa/ifc: not in enabled drivers build config
00:01:32.375 vdpa/mlx5: not in enabled drivers build config
00:01:32.375 vdpa/nfp: not in enabled drivers build config
00:01:32.375 vdpa/sfc: not in enabled drivers build config
00:01:32.375 event/*: missing internal dependency, "eventdev"
00:01:32.375 baseband/*: missing internal dependency, "bbdev"
00:01:32.375 gpu/*: missing internal dependency, "gpudev"
00:01:32.375
00:01:32.375
00:01:32.375 Build targets in project: 85
00:01:32.375
00:01:32.375 DPDK 24.03.0
00:01:32.375
00:01:32.375 User defined options
00:01:32.375 buildtype : debug
00:01:32.375 default_library : shared
00:01:32.375 libdir : lib
00:01:32.375 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:32.375 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:32.375 c_link_args :
00:01:32.375 cpu_instruction_set: native
00:01:32.375 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:01:32.375 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:01:32.375 enable_docs : false
00:01:32.375 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:32.375 enable_kmods : false
00:01:32.375 max_lcores : 128
00:01:32.375 tests : false
00:01:32.375
00:01:32.375 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
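The "User defined options" block above is DPDK's meson configuration echoed back. Reassembled as a command line it would look roughly like this; the two long disable lists are elided here because they appear in full above, and the exact invocation SPDK's configure generates is not itself shown in the log:

# Run from the dpdk/ source directory; build-tmp matches the log's build dir.
meson setup build-tmp \
    --buildtype=debug -Ddefault_library=shared -Dlibdir=lib \
    -Dprefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps='test-sad,test-acl,...' \
    -Ddisable_libs='port,sched,rib,...' \
    -Denable_docs=false -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
ninja -C build-tmp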
00:01:33.317 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:33.317 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:33.317 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:33.317 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:33.317 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:33.317 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:33.317 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:33.317 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:33.317 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:33.576 [9/268] Linking static target lib/librte_kvargs.a
00:01:33.576 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:33.576 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:33.576 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:33.576 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:33.576 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:33.576 [15/268] Linking static target lib/librte_log.a
00:01:33.576 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:34.148 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.148 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:34.148 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:34.148 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:34.148 [21/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:34.148 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:34.148 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:34.413 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:34.413 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:34.413 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:34.413 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:34.413 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:34.413 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:34.413 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:34.413 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:34.413 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:34.413 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:34.413 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:34.413 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:34.413 [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:34.413 [37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:34.413 [38/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:34.413 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:34.677 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:34.677 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:34.677 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:34.678 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:34.678 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:34.678 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:34.678 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:34.678 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:34.678 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:34.678 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:34.678 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:34.678 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:34.678 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:34.678 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:34.678 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:34.678 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:34.678 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:34.678 [57/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:34.678 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:34.678 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:34.678 [60/268] Linking static target lib/librte_telemetry.a
00:01:34.678 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:34.678 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:34.946 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:34.946 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:34.946 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.946 [66/268] Linking target lib/librte_log.so.24.1
00:01:34.946 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:35.208 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:35.208 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:35.208 [70/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:35.208 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:35.208 [72/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:35.208 [73/268] Linking static target lib/librte_pci.a 00:01:35.208 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:35.208 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:35.470 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:35.470 [77/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:35.470 [78/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:35.470 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:35.470 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:35.470 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:35.470 [82/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:35.470 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:35.470 [84/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:35.470 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:35.470 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:35.470 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:35.470 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:35.470 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:35.470 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:35.470 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:35.470 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:35.470 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:35.470 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:35.470 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:35.470 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:35.470 [97/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:35.470 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:35.470 [99/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:35.470 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:35.733 [101/268] Linking static target lib/librte_eal.a 00:01:35.733 [102/268] Linking target lib/librte_kvargs.so.24.1 00:01:35.733 [103/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:35.733 [104/268] Linking static target lib/librte_ring.a 00:01:35.733 [105/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:35.733 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:35.733 [107/268] Linking static target lib/librte_mempool.a 00:01:35.734 [108/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:35.734 [109/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:35.734 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:35.734 [111/268] Linking static target lib/librte_rcu.a 00:01:35.734 [112/268] Generating lib/pci.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:35.734 [113/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.734 [114/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:36.001 [115/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:36.001 [116/268] Linking static target lib/librte_meter.a 00:01:36.001 [117/268] Linking target lib/librte_telemetry.so.24.1 00:01:36.001 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:36.001 [119/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:36.001 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:36.001 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:36.001 [122/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:36.001 [123/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:36.001 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:36.001 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:36.263 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:36.263 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:36.263 [128/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:36.263 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:36.263 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:36.263 [131/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:36.263 [132/268] Linking static target lib/librte_net.a 00:01:36.263 [133/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:36.263 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:36.263 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:36.263 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:36.263 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:36.263 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:36.263 [139/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:36.263 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:36.263 [141/268] Linking static target lib/librte_cmdline.a 00:01:36.528 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:36.528 [143/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.528 [144/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:36.528 [145/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.528 [146/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.528 [147/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:36.528 [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:36.528 [149/268] Linking static target lib/librte_timer.a 00:01:36.528 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:36.528 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:36.528 [152/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:36.528 [153/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:36.528 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:36.792 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:36.792 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.792 [157/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:36.792 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:36.792 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:36.792 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:37.051 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:37.051 [162/268] Linking static target lib/librte_dmadev.a 00:01:37.051 [163/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:37.051 [164/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.051 [165/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:37.051 [166/268] Linking static target lib/librte_compressdev.a 00:01:37.051 [167/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.051 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:37.051 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:37.051 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:37.051 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:37.308 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:37.308 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:37.308 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:37.308 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:37.308 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:37.308 [177/268] Linking static target lib/librte_power.a 00:01:37.308 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:37.308 [179/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.308 [180/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:37.308 [181/268] Linking static target lib/librte_hash.a 00:01:37.308 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:37.309 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.309 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:37.609 [185/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:37.609 [186/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:37.609 [187/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.609 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:37.609 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:37.609 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:37.609 
[191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:37.609 [192/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:37.609 [193/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:37.609 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:37.609 [195/268] Linking static target lib/librte_reorder.a 00:01:37.609 [196/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:37.609 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:37.609 [198/268] Linking static target drivers/librte_bus_vdev.a 00:01:37.886 [199/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:37.886 [200/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:37.886 [201/268] Linking static target lib/librte_mbuf.a 00:01:37.886 [202/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:37.886 [203/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.886 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:37.886 [205/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:37.886 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:37.886 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:37.886 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:37.886 [209/268] Linking static target drivers/librte_bus_pci.a 00:01:37.886 [210/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.886 [211/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:37.886 [212/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.886 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.886 [214/268] Linking static target lib/librte_security.a 00:01:38.144 [215/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:38.144 [216/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:38.144 [217/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.144 [218/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:38.144 [219/268] Linking static target drivers/librte_mempool_ring.a 00:01:38.144 [220/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:38.144 [221/268] Linking static target lib/librte_cryptodev.a 00:01:38.403 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.403 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.403 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.403 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:38.403 [226/268] Linking static target lib/librte_ethdev.a 00:01:39.778 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.345 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:43.630 [229/268] Generating 
lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.630 [230/268] Linking target lib/librte_eal.so.24.1 00:01:43.889 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:43.889 [232/268] Linking target lib/librte_pci.so.24.1 00:01:43.889 [233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:43.889 [234/268] Linking target lib/librte_dmadev.so.24.1 00:01:43.889 [235/268] Linking target lib/librte_ring.so.24.1 00:01:43.889 [236/268] Linking target lib/librte_timer.so.24.1 00:01:43.889 [237/268] Linking target lib/librte_meter.so.24.1 00:01:44.148 [238/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:44.148 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:44.148 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:44.148 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:44.148 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:44.149 [243/268] Linking target lib/librte_rcu.so.24.1 00:01:44.149 [244/268] Linking target lib/librte_mempool.so.24.1 00:01:44.149 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:44.407 [246/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.407 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:44.407 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:44.407 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:44.407 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:44.666 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:44.666 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:44.666 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:44.666 [254/268] Linking target lib/librte_net.so.24.1 00:01:44.666 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:44.924 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:44.924 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:44.924 [258/268] Linking target lib/librte_cmdline.so.24.1 00:01:44.924 [259/268] Linking target lib/librte_hash.so.24.1 00:01:44.924 [260/268] Linking target lib/librte_security.so.24.1 00:01:44.924 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:45.183 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:45.183 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:45.183 [264/268] Linking target lib/librte_power.so.24.1 00:01:47.717 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:47.717 [266/268] Linking static target lib/librte_vhost.a 00:01:49.098 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.356 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:49.356 INFO: autodetecting backend as ninja 00:01:49.356 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:50.726 CC lib/ut_mock/mock.o 00:01:50.726 CC lib/ut/ut.o 00:01:50.726 CC lib/log/log.o 00:01:50.726 CC lib/log/log_flags.o 00:01:50.726 CC 
lib/log/log_deprecated.o 00:01:50.984 LIB libspdk_ut.a 00:01:50.984 LIB libspdk_ut_mock.a 00:01:50.984 SO libspdk_ut.so.2.0 00:01:50.984 SO libspdk_ut_mock.so.6.0 00:01:50.984 LIB libspdk_log.a 00:01:50.984 SYMLINK libspdk_ut_mock.so 00:01:50.984 SYMLINK libspdk_ut.so 00:01:51.243 SO libspdk_log.so.7.0 00:01:51.243 SYMLINK libspdk_log.so 00:01:51.501 CC lib/ioat/ioat.o 00:01:51.501 CC lib/util/base64.o 00:01:51.501 CC lib/util/bit_array.o 00:01:51.501 CC lib/util/crc16.o 00:01:51.501 CC lib/util/cpuset.o 00:01:51.501 CC lib/util/crc32c.o 00:01:51.501 CC lib/util/crc32.o 00:01:51.501 CC lib/util/crc32_ieee.o 00:01:51.501 CC lib/util/crc64.o 00:01:51.501 CC lib/dma/dma.o 00:01:51.501 CC lib/util/dif.o 00:01:51.501 CC lib/util/fd.o 00:01:51.501 CC lib/util/file.o 00:01:51.501 CC lib/util/fd_group.o 00:01:51.501 CC lib/util/hexlify.o 00:01:51.501 CC lib/util/iov.o 00:01:51.501 CC lib/util/net.o 00:01:51.501 CC lib/util/math.o 00:01:51.501 CC lib/util/pipe.o 00:01:51.501 CXX lib/trace_parser/trace.o 00:01:51.501 CC lib/util/strerror_tls.o 00:01:51.501 CC lib/util/string.o 00:01:51.501 CC lib/util/uuid.o 00:01:51.501 CC lib/util/xor.o 00:01:51.501 CC lib/util/zipf.o 00:01:51.501 CC lib/vfio_user/host/vfio_user_pci.o 00:01:51.501 CC lib/vfio_user/host/vfio_user.o 00:01:51.760 LIB libspdk_dma.a 00:01:51.760 SO libspdk_dma.so.4.0 00:01:51.760 SYMLINK libspdk_dma.so 00:01:51.760 LIB libspdk_ioat.a 00:01:52.018 SO libspdk_ioat.so.7.0 00:01:52.018 SYMLINK libspdk_ioat.so 00:01:52.018 LIB libspdk_util.a 00:01:52.018 LIB libspdk_vfio_user.a 00:01:52.277 SO libspdk_vfio_user.so.5.0 00:01:52.277 SYMLINK libspdk_vfio_user.so 00:01:52.277 SO libspdk_util.so.10.0 00:01:52.536 SYMLINK libspdk_util.so 00:01:52.536 CC lib/conf/conf.o 00:01:52.536 CC lib/vmd/vmd.o 00:01:52.536 CC lib/vmd/led.o 00:01:52.536 CC lib/rdma_provider/common.o 00:01:52.536 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:52.536 CC lib/idxd/idxd.o 00:01:52.536 CC lib/idxd/idxd_user.o 00:01:52.536 CC lib/idxd/idxd_kernel.o 00:01:52.536 CC lib/rdma_utils/rdma_utils.o 00:01:52.536 CC lib/env_dpdk/env.o 00:01:52.536 CC lib/env_dpdk/memory.o 00:01:52.536 CC lib/env_dpdk/pci.o 00:01:52.536 CC lib/env_dpdk/init.o 00:01:52.536 CC lib/env_dpdk/threads.o 00:01:52.536 CC lib/env_dpdk/pci_ioat.o 00:01:52.536 CC lib/env_dpdk/pci_virtio.o 00:01:52.536 CC lib/env_dpdk/pci_vmd.o 00:01:52.536 CC lib/env_dpdk/pci_idxd.o 00:01:52.536 CC lib/env_dpdk/pci_event.o 00:01:52.536 CC lib/json/json_parse.o 00:01:52.536 CC lib/env_dpdk/sigbus_handler.o 00:01:52.536 CC lib/json/json_util.o 00:01:52.536 CC lib/json/json_write.o 00:01:52.536 CC lib/env_dpdk/pci_dpdk.o 00:01:52.536 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:52.536 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:52.794 LIB libspdk_rdma_provider.a 00:01:52.794 LIB libspdk_conf.a 00:01:52.794 SO libspdk_rdma_provider.so.6.0 00:01:52.794 SO libspdk_conf.so.6.0 00:01:53.053 SYMLINK libspdk_rdma_provider.so 00:01:53.053 SYMLINK libspdk_conf.so 00:01:53.053 LIB libspdk_rdma_utils.a 00:01:53.053 LIB libspdk_json.a 00:01:53.053 SO libspdk_rdma_utils.so.1.0 00:01:53.053 SO libspdk_json.so.6.0 00:01:53.053 SYMLINK libspdk_rdma_utils.so 00:01:53.053 SYMLINK libspdk_json.so 00:01:53.311 LIB libspdk_idxd.a 00:01:53.311 LIB libspdk_trace_parser.a 00:01:53.311 SO libspdk_idxd.so.12.0 00:01:53.311 SO libspdk_trace_parser.so.5.0 00:01:53.311 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:53.311 CC lib/jsonrpc/jsonrpc_server.o 00:01:53.311 CC lib/jsonrpc/jsonrpc_client.o 00:01:53.311 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:53.311 SYMLINK 
libspdk_idxd.so 00:01:53.569 SYMLINK libspdk_trace_parser.so 00:01:53.569 LIB libspdk_vmd.a 00:01:53.569 SO libspdk_vmd.so.6.0 00:01:53.827 LIB libspdk_jsonrpc.a 00:01:53.827 SYMLINK libspdk_vmd.so 00:01:53.827 SO libspdk_jsonrpc.so.6.0 00:01:53.827 SYMLINK libspdk_jsonrpc.so 00:01:54.085 CC lib/rpc/rpc.o 00:01:54.343 LIB libspdk_rpc.a 00:01:54.343 SO libspdk_rpc.so.6.0 00:01:54.343 SYMLINK libspdk_rpc.so 00:01:54.633 CC lib/notify/notify.o 00:01:54.633 CC lib/notify/notify_rpc.o 00:01:54.633 CC lib/keyring/keyring.o 00:01:54.633 CC lib/trace/trace.o 00:01:54.633 CC lib/keyring/keyring_rpc.o 00:01:54.633 CC lib/trace/trace_flags.o 00:01:54.633 CC lib/trace/trace_rpc.o 00:01:54.901 LIB libspdk_notify.a 00:01:54.901 SO libspdk_notify.so.6.0 00:01:54.901 LIB libspdk_keyring.a 00:01:54.902 SO libspdk_keyring.so.1.0 00:01:54.902 SYMLINK libspdk_notify.so 00:01:54.902 LIB libspdk_trace.a 00:01:55.158 SYMLINK libspdk_keyring.so 00:01:55.158 SO libspdk_trace.so.10.0 00:01:55.158 LIB libspdk_env_dpdk.a 00:01:55.158 SYMLINK libspdk_trace.so 00:01:55.158 SO libspdk_env_dpdk.so.15.0 00:01:55.416 CC lib/thread/thread.o 00:01:55.416 CC lib/thread/iobuf.o 00:01:55.416 CC lib/sock/sock.o 00:01:55.416 CC lib/sock/sock_rpc.o 00:01:55.416 SYMLINK libspdk_env_dpdk.so 00:01:55.983 LIB libspdk_sock.a 00:01:55.983 SO libspdk_sock.so.10.0 00:01:55.983 SYMLINK libspdk_sock.so 00:01:56.242 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:56.242 CC lib/nvme/nvme_ctrlr.o 00:01:56.242 CC lib/nvme/nvme_fabric.o 00:01:56.242 CC lib/nvme/nvme_ns_cmd.o 00:01:56.242 CC lib/nvme/nvme_ns.o 00:01:56.242 CC lib/nvme/nvme_pcie_common.o 00:01:56.242 CC lib/nvme/nvme_pcie.o 00:01:56.242 CC lib/nvme/nvme_qpair.o 00:01:56.242 CC lib/nvme/nvme.o 00:01:56.242 CC lib/nvme/nvme_quirks.o 00:01:56.242 CC lib/nvme/nvme_transport.o 00:01:56.242 CC lib/nvme/nvme_discovery.o 00:01:56.242 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:56.242 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:56.242 CC lib/nvme/nvme_tcp.o 00:01:56.242 CC lib/nvme/nvme_opal.o 00:01:56.242 CC lib/nvme/nvme_io_msg.o 00:01:56.242 CC lib/nvme/nvme_poll_group.o 00:01:56.242 CC lib/nvme/nvme_zns.o 00:01:56.242 CC lib/nvme/nvme_stubs.o 00:01:56.242 CC lib/nvme/nvme_auth.o 00:01:56.242 CC lib/nvme/nvme_cuse.o 00:01:56.242 CC lib/nvme/nvme_vfio_user.o 00:01:56.242 CC lib/nvme/nvme_rdma.o 00:01:57.177 LIB libspdk_thread.a 00:01:57.177 SO libspdk_thread.so.10.1 00:01:57.177 SYMLINK libspdk_thread.so 00:01:57.436 CC lib/virtio/virtio.o 00:01:57.436 CC lib/accel/accel.o 00:01:57.436 CC lib/accel/accel_rpc.o 00:01:57.436 CC lib/virtio/virtio_vhost_user.o 00:01:57.436 CC lib/accel/accel_sw.o 00:01:57.436 CC lib/virtio/virtio_vfio_user.o 00:01:57.436 CC lib/virtio/virtio_pci.o 00:01:57.436 CC lib/blob/blobstore.o 00:01:57.436 CC lib/vfu_tgt/tgt_endpoint.o 00:01:57.436 CC lib/vfu_tgt/tgt_rpc.o 00:01:57.436 CC lib/blob/request.o 00:01:57.436 CC lib/blob/zeroes.o 00:01:57.436 CC lib/blob/blob_bs_dev.o 00:01:57.436 CC lib/init/json_config.o 00:01:57.436 CC lib/init/subsystem.o 00:01:57.436 CC lib/init/subsystem_rpc.o 00:01:57.436 CC lib/init/rpc.o 00:01:57.694 LIB libspdk_init.a 00:01:57.694 SO libspdk_init.so.5.0 00:01:57.694 LIB libspdk_vfu_tgt.a 00:01:57.694 SO libspdk_vfu_tgt.so.3.0 00:01:57.952 LIB libspdk_virtio.a 00:01:57.952 SYMLINK libspdk_init.so 00:01:57.952 SO libspdk_virtio.so.7.0 00:01:57.952 SYMLINK libspdk_vfu_tgt.so 00:01:57.952 SYMLINK libspdk_virtio.so 00:01:57.952 CC lib/event/app.o 00:01:57.952 CC lib/event/reactor.o 00:01:57.952 CC lib/event/log_rpc.o 00:01:57.952 CC lib/event/app_rpc.o 
00:01:57.952 CC lib/event/scheduler_static.o 00:01:58.519 LIB libspdk_event.a 00:01:58.776 SO libspdk_event.so.14.0 00:01:58.776 SYMLINK libspdk_event.so 00:01:59.034 LIB libspdk_accel.a 00:01:59.034 SO libspdk_accel.so.16.0 00:01:59.292 SYMLINK libspdk_accel.so 00:01:59.292 CC lib/bdev/bdev.o 00:01:59.292 CC lib/bdev/bdev_rpc.o 00:01:59.292 CC lib/bdev/bdev_zone.o 00:01:59.292 CC lib/bdev/part.o 00:01:59.292 CC lib/bdev/scsi_nvme.o 00:01:59.551 LIB libspdk_nvme.a 00:01:59.810 SO libspdk_nvme.so.13.1 00:02:00.377 SYMLINK libspdk_nvme.so 00:02:01.752 LIB libspdk_blob.a 00:02:02.011 SO libspdk_blob.so.11.0 00:02:02.011 SYMLINK libspdk_blob.so 00:02:02.270 CC lib/blobfs/blobfs.o 00:02:02.270 CC lib/blobfs/tree.o 00:02:02.270 CC lib/lvol/lvol.o 00:02:03.206 LIB libspdk_bdev.a 00:02:03.206 LIB libspdk_blobfs.a 00:02:03.206 SO libspdk_bdev.so.16.0 00:02:03.206 SO libspdk_blobfs.so.10.0 00:02:03.465 SYMLINK libspdk_bdev.so 00:02:03.465 SYMLINK libspdk_blobfs.so 00:02:03.734 CC lib/ftl/ftl_core.o 00:02:03.734 CC lib/ftl/ftl_init.o 00:02:03.734 CC lib/ftl/ftl_layout.o 00:02:03.734 CC lib/ftl/ftl_io.o 00:02:03.734 CC lib/ftl/ftl_debug.o 00:02:03.734 CC lib/ftl/ftl_sb.o 00:02:03.734 CC lib/ftl/ftl_l2p.o 00:02:03.734 CC lib/ftl/ftl_l2p_flat.o 00:02:03.734 CC lib/nvmf/ctrlr.o 00:02:03.734 CC lib/ftl/ftl_nv_cache.o 00:02:03.734 CC lib/nvmf/ctrlr_discovery.o 00:02:03.734 CC lib/scsi/dev.o 00:02:03.734 CC lib/ftl/ftl_band.o 00:02:03.734 CC lib/nvmf/ctrlr_bdev.o 00:02:03.734 CC lib/ftl/ftl_band_ops.o 00:02:03.734 CC lib/scsi/lun.o 00:02:03.734 CC lib/nvmf/subsystem.o 00:02:03.734 CC lib/ftl/ftl_writer.o 00:02:03.734 CC lib/nvmf/nvmf.o 00:02:03.734 CC lib/scsi/port.o 00:02:03.734 CC lib/ftl/ftl_rq.o 00:02:03.734 CC lib/ftl/ftl_reloc.o 00:02:03.734 CC lib/scsi/scsi.o 00:02:03.734 CC lib/scsi/scsi_bdev.o 00:02:03.734 CC lib/ftl/ftl_l2p_cache.o 00:02:03.734 CC lib/nvmf/nvmf_rpc.o 00:02:03.734 CC lib/nvmf/transport.o 00:02:03.734 CC lib/ftl/ftl_p2l.o 00:02:03.734 CC lib/scsi/scsi_pr.o 00:02:03.734 CC lib/ftl/mngt/ftl_mngt.o 00:02:03.734 CC lib/nvmf/tcp.o 00:02:03.734 CC lib/scsi/scsi_rpc.o 00:02:03.734 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:03.734 CC lib/nvmf/stubs.o 00:02:03.734 CC lib/nvmf/mdns_server.o 00:02:03.734 CC lib/scsi/task.o 00:02:03.734 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:03.734 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:03.734 CC lib/nvmf/vfio_user.o 00:02:03.734 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:03.734 CC lib/nvmf/rdma.o 00:02:03.734 CC lib/nvmf/auth.o 00:02:03.734 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:03.734 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:03.735 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:03.735 CC lib/nbd/nbd.o 00:02:03.735 CC lib/ublk/ublk.o 00:02:03.993 CC lib/nbd/nbd_rpc.o 00:02:03.993 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:03.993 CC lib/ublk/ublk_rpc.o 00:02:03.993 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:03.993 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:03.993 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:04.253 LIB libspdk_lvol.a 00:02:04.253 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:04.253 CC lib/ftl/utils/ftl_conf.o 00:02:04.253 SO libspdk_lvol.so.10.0 00:02:04.253 CC lib/ftl/utils/ftl_md.o 00:02:04.253 CC lib/ftl/utils/ftl_mempool.o 00:02:04.253 CC lib/ftl/utils/ftl_bitmap.o 00:02:04.253 CC lib/ftl/utils/ftl_property.o 00:02:04.253 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:04.253 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:04.253 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:04.253 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:04.253 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:04.253 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:04.253 SYMLINK libspdk_lvol.so 00:02:04.253 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:04.253 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:04.253 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:04.253 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:04.253 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:04.253 CC lib/ftl/base/ftl_base_dev.o 00:02:04.253 CC lib/ftl/base/ftl_base_bdev.o 00:02:04.513 CC lib/ftl/ftl_trace.o 00:02:04.513 LIB libspdk_nbd.a 00:02:04.513 SO libspdk_nbd.so.7.0 00:02:04.771 SYMLINK libspdk_nbd.so 00:02:04.771 LIB libspdk_scsi.a 00:02:04.771 SO libspdk_scsi.so.9.0 00:02:05.039 SYMLINK libspdk_scsi.so 00:02:05.039 LIB libspdk_ublk.a 00:02:05.039 SO libspdk_ublk.so.3.0 00:02:05.039 SYMLINK libspdk_ublk.so 00:02:05.039 CC lib/iscsi/conn.o 00:02:05.039 CC lib/iscsi/init_grp.o 00:02:05.039 CC lib/iscsi/iscsi.o 00:02:05.039 CC lib/iscsi/param.o 00:02:05.039 CC lib/iscsi/md5.o 00:02:05.039 CC lib/iscsi/portal_grp.o 00:02:05.039 CC lib/iscsi/tgt_node.o 00:02:05.039 CC lib/iscsi/iscsi_subsystem.o 00:02:05.039 CC lib/iscsi/iscsi_rpc.o 00:02:05.039 CC lib/iscsi/task.o 00:02:05.297 CC lib/vhost/vhost.o 00:02:05.297 CC lib/vhost/vhost_rpc.o 00:02:05.297 CC lib/vhost/vhost_scsi.o 00:02:05.297 CC lib/vhost/vhost_blk.o 00:02:05.297 CC lib/vhost/rte_vhost_user.o 00:02:05.556 LIB libspdk_ftl.a 00:02:05.556 SO libspdk_ftl.so.9.0 00:02:06.125 SYMLINK libspdk_ftl.so 00:02:07.501 LIB libspdk_vhost.a 00:02:07.501 SO libspdk_vhost.so.8.0 00:02:07.501 LIB libspdk_iscsi.a 00:02:07.501 SO libspdk_iscsi.so.8.0 00:02:07.501 SYMLINK libspdk_vhost.so 00:02:07.807 LIB libspdk_nvmf.a 00:02:07.807 SO libspdk_nvmf.so.19.0 00:02:07.807 SYMLINK libspdk_iscsi.so 00:02:08.064 SYMLINK libspdk_nvmf.so 00:02:08.322 CC module/vfu_device/vfu_virtio.o 00:02:08.322 CC module/vfu_device/vfu_virtio_blk.o 00:02:08.322 CC module/vfu_device/vfu_virtio_scsi.o 00:02:08.322 CC module/vfu_device/vfu_virtio_rpc.o 00:02:08.581 CC module/env_dpdk/env_dpdk_rpc.o 00:02:08.581 CC module/sock/posix/posix.o 00:02:08.581 CC module/scheduler/gscheduler/gscheduler.o 00:02:08.581 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:08.581 CC module/accel/ioat/accel_ioat.o 00:02:08.581 CC module/accel/error/accel_error.o 00:02:08.581 CC module/accel/error/accel_error_rpc.o 00:02:08.581 CC module/accel/ioat/accel_ioat_rpc.o 00:02:08.581 CC module/accel/dsa/accel_dsa.o 00:02:08.581 CC module/accel/dsa/accel_dsa_rpc.o 00:02:08.581 CC module/keyring/linux/keyring.o 00:02:08.581 CC module/keyring/linux/keyring_rpc.o 00:02:08.581 CC module/blob/bdev/blob_bdev.o 00:02:08.581 CC module/accel/iaa/accel_iaa.o 00:02:08.581 CC module/accel/iaa/accel_iaa_rpc.o 00:02:08.581 CC module/keyring/file/keyring.o 00:02:08.581 CC module/keyring/file/keyring_rpc.o 00:02:08.581 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:08.581 LIB libspdk_env_dpdk_rpc.a 00:02:08.581 SO libspdk_env_dpdk_rpc.so.6.0 00:02:08.581 SYMLINK libspdk_env_dpdk_rpc.so 00:02:08.581 LIB libspdk_scheduler_gscheduler.a 00:02:08.840 SO libspdk_scheduler_gscheduler.so.4.0 00:02:08.840 LIB libspdk_scheduler_dpdk_governor.a 00:02:08.840 LIB libspdk_accel_ioat.a 00:02:08.840 LIB libspdk_keyring_linux.a 00:02:08.840 LIB libspdk_keyring_file.a 00:02:08.840 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:08.840 SYMLINK libspdk_scheduler_gscheduler.so 00:02:08.840 SO libspdk_keyring_file.so.1.0 00:02:08.840 SO libspdk_accel_ioat.so.6.0 00:02:08.840 SO libspdk_keyring_linux.so.1.0 00:02:08.840 LIB libspdk_accel_error.a 00:02:08.840 SYMLINK libspdk_scheduler_dpdk_governor.so 
00:02:08.840 LIB libspdk_scheduler_dynamic.a 00:02:08.840 SO libspdk_accel_error.so.2.0 00:02:08.840 LIB libspdk_accel_iaa.a 00:02:08.840 SYMLINK libspdk_keyring_file.so 00:02:08.840 LIB libspdk_accel_dsa.a 00:02:08.840 SO libspdk_scheduler_dynamic.so.4.0 00:02:08.840 SYMLINK libspdk_keyring_linux.so 00:02:08.840 SO libspdk_accel_iaa.so.3.0 00:02:08.840 SYMLINK libspdk_accel_ioat.so 00:02:08.840 SO libspdk_accel_dsa.so.5.0 00:02:08.840 SYMLINK libspdk_accel_error.so 00:02:08.840 SYMLINK libspdk_scheduler_dynamic.so 00:02:08.840 SYMLINK libspdk_accel_iaa.so 00:02:08.840 SYMLINK libspdk_accel_dsa.so 00:02:08.840 LIB libspdk_blob_bdev.a 00:02:08.840 SO libspdk_blob_bdev.so.11.0 00:02:09.098 SYMLINK libspdk_blob_bdev.so 00:02:09.360 CC module/bdev/nvme/bdev_nvme.o 00:02:09.360 CC module/bdev/error/vbdev_error.o 00:02:09.360 CC module/bdev/error/vbdev_error_rpc.o 00:02:09.360 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:09.360 CC module/bdev/nvme/nvme_rpc.o 00:02:09.360 CC module/bdev/nvme/bdev_mdns_client.o 00:02:09.360 CC module/bdev/aio/bdev_aio.o 00:02:09.360 CC module/bdev/nvme/vbdev_opal.o 00:02:09.360 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:09.360 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:09.360 CC module/bdev/aio/bdev_aio_rpc.o 00:02:09.360 CC module/bdev/delay/vbdev_delay.o 00:02:09.360 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:09.360 CC module/bdev/gpt/gpt.o 00:02:09.360 CC module/bdev/split/vbdev_split.o 00:02:09.360 CC module/bdev/split/vbdev_split_rpc.o 00:02:09.360 CC module/bdev/gpt/vbdev_gpt.o 00:02:09.360 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:09.360 CC module/blobfs/bdev/blobfs_bdev.o 00:02:09.360 CC module/bdev/null/bdev_null.o 00:02:09.360 CC module/bdev/malloc/bdev_malloc.o 00:02:09.360 CC module/bdev/passthru/vbdev_passthru.o 00:02:09.360 CC module/bdev/null/bdev_null_rpc.o 00:02:09.360 CC module/bdev/raid/bdev_raid.o 00:02:09.360 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:09.360 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:09.360 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:09.360 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:09.360 CC module/bdev/raid/bdev_raid_rpc.o 00:02:09.360 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:09.360 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:09.360 CC module/bdev/lvol/vbdev_lvol.o 00:02:09.360 CC module/bdev/raid/bdev_raid_sb.o 00:02:09.360 CC module/bdev/iscsi/bdev_iscsi.o 00:02:09.360 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:09.360 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:09.360 LIB libspdk_sock_posix.a 00:02:09.360 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:09.360 CC module/bdev/raid/raid0.o 00:02:09.360 CC module/bdev/raid/raid1.o 00:02:09.360 CC module/bdev/raid/concat.o 00:02:09.360 CC module/bdev/ftl/bdev_ftl.o 00:02:09.360 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:09.360 SO libspdk_sock_posix.so.6.0 00:02:09.360 LIB libspdk_vfu_device.a 00:02:09.360 SO libspdk_vfu_device.so.3.0 00:02:09.618 SYMLINK libspdk_sock_posix.so 00:02:09.618 SYMLINK libspdk_vfu_device.so 00:02:09.875 LIB libspdk_bdev_error.a 00:02:09.875 LIB libspdk_blobfs_bdev.a 00:02:09.875 LIB libspdk_bdev_iscsi.a 00:02:09.875 SO libspdk_bdev_error.so.6.0 00:02:09.875 SO libspdk_blobfs_bdev.so.6.0 00:02:09.875 SO libspdk_bdev_iscsi.so.6.0 00:02:09.876 LIB libspdk_bdev_split.a 00:02:09.876 LIB libspdk_bdev_gpt.a 00:02:09.876 LIB libspdk_bdev_null.a 00:02:09.876 LIB libspdk_bdev_zone_block.a 00:02:09.876 LIB libspdk_bdev_ftl.a 00:02:09.876 SO libspdk_bdev_split.so.6.0 00:02:09.876 SO libspdk_bdev_gpt.so.6.0 00:02:09.876 
SYMLINK libspdk_bdev_error.so 00:02:09.876 SYMLINK libspdk_blobfs_bdev.so 00:02:09.876 SO libspdk_bdev_ftl.so.6.0 00:02:09.876 SO libspdk_bdev_zone_block.so.6.0 00:02:09.876 LIB libspdk_bdev_passthru.a 00:02:09.876 SO libspdk_bdev_null.so.6.0 00:02:09.876 SYMLINK libspdk_bdev_iscsi.so 00:02:09.876 LIB libspdk_bdev_aio.a 00:02:09.876 SO libspdk_bdev_passthru.so.6.0 00:02:09.876 SYMLINK libspdk_bdev_gpt.so 00:02:09.876 SYMLINK libspdk_bdev_split.so 00:02:09.876 SO libspdk_bdev_aio.so.6.0 00:02:09.876 SYMLINK libspdk_bdev_zone_block.so 00:02:09.876 SYMLINK libspdk_bdev_null.so 00:02:09.876 SYMLINK libspdk_bdev_ftl.so 00:02:09.876 LIB libspdk_bdev_delay.a 00:02:09.876 SO libspdk_bdev_delay.so.6.0 00:02:09.876 SYMLINK libspdk_bdev_passthru.so 00:02:10.134 SYMLINK libspdk_bdev_aio.so 00:02:10.134 LIB libspdk_bdev_malloc.a 00:02:10.134 LIB libspdk_bdev_lvol.a 00:02:10.134 SYMLINK libspdk_bdev_delay.so 00:02:10.134 SO libspdk_bdev_lvol.so.6.0 00:02:10.134 SO libspdk_bdev_malloc.so.6.0 00:02:10.134 SYMLINK libspdk_bdev_lvol.so 00:02:10.134 SYMLINK libspdk_bdev_malloc.so 00:02:10.134 LIB libspdk_bdev_virtio.a 00:02:10.134 SO libspdk_bdev_virtio.so.6.0 00:02:10.394 SYMLINK libspdk_bdev_virtio.so 00:02:11.779 LIB libspdk_bdev_raid.a 00:02:11.779 SO libspdk_bdev_raid.so.6.0 00:02:11.779 SYMLINK libspdk_bdev_raid.so 00:02:15.065 LIB libspdk_bdev_nvme.a 00:02:15.065 SO libspdk_bdev_nvme.so.7.0 00:02:15.065 SYMLINK libspdk_bdev_nvme.so 00:02:15.328 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:15.328 CC module/event/subsystems/sock/sock.o 00:02:15.328 CC module/event/subsystems/keyring/keyring.o 00:02:15.328 CC module/event/subsystems/vmd/vmd.o 00:02:15.328 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:15.328 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:15.328 CC module/event/subsystems/iobuf/iobuf.o 00:02:15.328 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:15.328 CC module/event/subsystems/scheduler/scheduler.o 00:02:15.589 LIB libspdk_event_vmd.a 00:02:15.589 LIB libspdk_event_vhost_blk.a 00:02:15.589 LIB libspdk_event_scheduler.a 00:02:15.589 SO libspdk_event_vmd.so.6.0 00:02:15.589 SO libspdk_event_vhost_blk.so.3.0 00:02:15.589 SO libspdk_event_scheduler.so.4.0 00:02:15.589 LIB libspdk_event_keyring.a 00:02:15.589 SYMLINK libspdk_event_vhost_blk.so 00:02:15.589 SYMLINK libspdk_event_vmd.so 00:02:15.589 LIB libspdk_event_sock.a 00:02:15.589 LIB libspdk_event_vfu_tgt.a 00:02:15.589 SYMLINK libspdk_event_scheduler.so 00:02:15.589 SO libspdk_event_keyring.so.1.0 00:02:15.589 SO libspdk_event_sock.so.5.0 00:02:15.589 SO libspdk_event_vfu_tgt.so.3.0 00:02:15.589 LIB libspdk_event_iobuf.a 00:02:15.589 SYMLINK libspdk_event_keyring.so 00:02:15.589 SO libspdk_event_iobuf.so.3.0 00:02:15.589 SYMLINK libspdk_event_sock.so 00:02:15.589 SYMLINK libspdk_event_vfu_tgt.so 00:02:15.848 SYMLINK libspdk_event_iobuf.so 00:02:16.106 CC module/event/subsystems/accel/accel.o 00:02:16.365 LIB libspdk_event_accel.a 00:02:16.365 SO libspdk_event_accel.so.6.0 00:02:16.365 SYMLINK libspdk_event_accel.so 00:02:16.623 CC module/event/subsystems/bdev/bdev.o 00:02:17.189 LIB libspdk_event_bdev.a 00:02:17.189 SO libspdk_event_bdev.so.6.0 00:02:17.189 SYMLINK libspdk_event_bdev.so 00:02:17.447 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:17.447 CC module/event/subsystems/scsi/scsi.o 00:02:17.447 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:17.447 CC module/event/subsystems/ublk/ublk.o 00:02:17.447 CC module/event/subsystems/nbd/nbd.o 00:02:17.707 LIB libspdk_event_scsi.a 00:02:17.707 LIB 
libspdk_event_nbd.a 00:02:17.707 LIB libspdk_event_ublk.a 00:02:17.707 SO libspdk_event_nbd.so.6.0 00:02:17.707 SO libspdk_event_scsi.so.6.0 00:02:17.707 SO libspdk_event_ublk.so.3.0 00:02:17.707 SYMLINK libspdk_event_nbd.so 00:02:17.707 LIB libspdk_event_nvmf.a 00:02:17.707 SYMLINK libspdk_event_scsi.so 00:02:17.707 SYMLINK libspdk_event_ublk.so 00:02:17.707 SO libspdk_event_nvmf.so.6.0 00:02:17.707 SYMLINK libspdk_event_nvmf.so 00:02:17.964 CC module/event/subsystems/iscsi/iscsi.o 00:02:17.964 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:18.223 LIB libspdk_event_iscsi.a 00:02:18.223 SO libspdk_event_iscsi.so.6.0 00:02:18.223 SYMLINK libspdk_event_iscsi.so 00:02:18.223 LIB libspdk_event_vhost_scsi.a 00:02:18.223 SO libspdk_event_vhost_scsi.so.3.0 00:02:18.481 SYMLINK libspdk_event_vhost_scsi.so 00:02:18.481 SO libspdk.so.6.0 00:02:18.481 SYMLINK libspdk.so 00:02:18.747 CXX app/trace/trace.o 00:02:18.747 CC app/spdk_nvme_identify/identify.o 00:02:18.747 CC app/spdk_nvme_perf/perf.o 00:02:18.747 TEST_HEADER include/spdk/accel.h 00:02:18.747 TEST_HEADER include/spdk/accel_module.h 00:02:18.747 TEST_HEADER include/spdk/assert.h 00:02:18.747 TEST_HEADER include/spdk/barrier.h 00:02:18.747 TEST_HEADER include/spdk/base64.h 00:02:18.747 TEST_HEADER include/spdk/bdev.h 00:02:18.747 TEST_HEADER include/spdk/bdev_module.h 00:02:18.747 TEST_HEADER include/spdk/bit_array.h 00:02:18.747 TEST_HEADER include/spdk/bdev_zone.h 00:02:18.747 CC app/trace_record/trace_record.o 00:02:18.747 TEST_HEADER include/spdk/bit_pool.h 00:02:18.747 TEST_HEADER include/spdk/blob_bdev.h 00:02:18.747 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:18.747 CC app/spdk_lspci/spdk_lspci.o 00:02:18.747 TEST_HEADER include/spdk/blobfs.h 00:02:18.747 TEST_HEADER include/spdk/blob.h 00:02:18.747 CC app/spdk_nvme_discover/discovery_aer.o 00:02:18.747 TEST_HEADER include/spdk/conf.h 00:02:18.747 CC app/spdk_top/spdk_top.o 00:02:18.747 TEST_HEADER include/spdk/config.h 00:02:18.747 TEST_HEADER include/spdk/cpuset.h 00:02:18.747 TEST_HEADER include/spdk/crc16.h 00:02:18.747 TEST_HEADER include/spdk/crc32.h 00:02:18.747 CC test/rpc_client/rpc_client_test.o 00:02:18.747 TEST_HEADER include/spdk/dif.h 00:02:18.747 TEST_HEADER include/spdk/crc64.h 00:02:18.747 TEST_HEADER include/spdk/dma.h 00:02:18.747 TEST_HEADER include/spdk/endian.h 00:02:18.747 TEST_HEADER include/spdk/env_dpdk.h 00:02:18.747 TEST_HEADER include/spdk/env.h 00:02:18.747 TEST_HEADER include/spdk/event.h 00:02:18.747 TEST_HEADER include/spdk/fd_group.h 00:02:18.747 TEST_HEADER include/spdk/fd.h 00:02:18.747 TEST_HEADER include/spdk/file.h 00:02:18.747 TEST_HEADER include/spdk/ftl.h 00:02:18.747 TEST_HEADER include/spdk/gpt_spec.h 00:02:18.747 TEST_HEADER include/spdk/hexlify.h 00:02:18.747 TEST_HEADER include/spdk/histogram_data.h 00:02:18.747 TEST_HEADER include/spdk/idxd.h 00:02:18.747 TEST_HEADER include/spdk/idxd_spec.h 00:02:18.747 TEST_HEADER include/spdk/init.h 00:02:18.747 TEST_HEADER include/spdk/ioat.h 00:02:18.747 TEST_HEADER include/spdk/ioat_spec.h 00:02:18.747 TEST_HEADER include/spdk/iscsi_spec.h 00:02:18.747 TEST_HEADER include/spdk/json.h 00:02:18.747 TEST_HEADER include/spdk/jsonrpc.h 00:02:18.747 TEST_HEADER include/spdk/keyring.h 00:02:18.747 TEST_HEADER include/spdk/keyring_module.h 00:02:18.747 TEST_HEADER include/spdk/likely.h 00:02:18.747 TEST_HEADER include/spdk/log.h 00:02:18.747 TEST_HEADER include/spdk/memory.h 00:02:18.747 TEST_HEADER include/spdk/lvol.h 00:02:18.747 TEST_HEADER include/spdk/mmio.h 00:02:18.747 TEST_HEADER 
include/spdk/nbd.h 00:02:18.747 TEST_HEADER include/spdk/net.h 00:02:18.747 TEST_HEADER include/spdk/notify.h 00:02:18.747 TEST_HEADER include/spdk/nvme.h 00:02:18.747 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:18.747 TEST_HEADER include/spdk/nvme_intel.h 00:02:18.747 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:18.747 TEST_HEADER include/spdk/nvme_spec.h 00:02:18.747 TEST_HEADER include/spdk/nvme_zns.h 00:02:18.747 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:18.747 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:18.747 TEST_HEADER include/spdk/nvmf.h 00:02:18.747 TEST_HEADER include/spdk/nvmf_spec.h 00:02:18.747 TEST_HEADER include/spdk/nvmf_transport.h 00:02:18.747 TEST_HEADER include/spdk/opal.h 00:02:18.747 TEST_HEADER include/spdk/pci_ids.h 00:02:18.747 TEST_HEADER include/spdk/opal_spec.h 00:02:18.747 TEST_HEADER include/spdk/pipe.h 00:02:18.747 TEST_HEADER include/spdk/reduce.h 00:02:18.747 TEST_HEADER include/spdk/queue.h 00:02:18.747 TEST_HEADER include/spdk/rpc.h 00:02:18.747 TEST_HEADER include/spdk/scheduler.h 00:02:18.747 TEST_HEADER include/spdk/scsi.h 00:02:18.747 TEST_HEADER include/spdk/scsi_spec.h 00:02:18.747 TEST_HEADER include/spdk/sock.h 00:02:18.747 TEST_HEADER include/spdk/stdinc.h 00:02:18.747 TEST_HEADER include/spdk/string.h 00:02:18.747 TEST_HEADER include/spdk/thread.h 00:02:18.747 TEST_HEADER include/spdk/trace.h 00:02:18.747 CC app/spdk_dd/spdk_dd.o 00:02:18.747 TEST_HEADER include/spdk/trace_parser.h 00:02:18.747 TEST_HEADER include/spdk/tree.h 00:02:18.747 TEST_HEADER include/spdk/ublk.h 00:02:18.747 TEST_HEADER include/spdk/util.h 00:02:18.747 TEST_HEADER include/spdk/uuid.h 00:02:18.747 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:18.747 TEST_HEADER include/spdk/version.h 00:02:18.747 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:18.747 TEST_HEADER include/spdk/vhost.h 00:02:18.747 TEST_HEADER include/spdk/xor.h 00:02:18.747 TEST_HEADER include/spdk/vmd.h 00:02:18.747 TEST_HEADER include/spdk/zipf.h 00:02:18.747 CXX test/cpp_headers/accel.o 00:02:18.747 CXX test/cpp_headers/accel_module.o 00:02:18.747 CXX test/cpp_headers/assert.o 00:02:18.747 CXX test/cpp_headers/barrier.o 00:02:18.747 CXX test/cpp_headers/base64.o 00:02:18.747 CXX test/cpp_headers/bdev.o 00:02:18.747 CXX test/cpp_headers/bdev_module.o 00:02:18.747 CXX test/cpp_headers/bdev_zone.o 00:02:18.747 CXX test/cpp_headers/bit_array.o 00:02:18.747 CXX test/cpp_headers/bit_pool.o 00:02:18.747 CXX test/cpp_headers/blob_bdev.o 00:02:18.747 CXX test/cpp_headers/blobfs_bdev.o 00:02:18.747 CXX test/cpp_headers/blobfs.o 00:02:18.747 CXX test/cpp_headers/blob.o 00:02:18.747 CXX test/cpp_headers/conf.o 00:02:18.747 CXX test/cpp_headers/config.o 00:02:18.747 CC app/nvmf_tgt/nvmf_main.o 00:02:18.747 CXX test/cpp_headers/cpuset.o 00:02:18.747 CXX test/cpp_headers/crc16.o 00:02:18.747 CC app/iscsi_tgt/iscsi_tgt.o 00:02:18.747 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:18.747 CC app/spdk_tgt/spdk_tgt.o 00:02:18.747 CXX test/cpp_headers/crc32.o 00:02:18.747 CC test/app/histogram_perf/histogram_perf.o 00:02:18.747 CC test/env/memory/memory_ut.o 00:02:18.747 CC examples/util/zipf/zipf.o 00:02:18.747 CC test/app/stub/stub.o 00:02:19.008 CC test/env/pci/pci_ut.o 00:02:19.008 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:19.008 CC test/env/vtophys/vtophys.o 00:02:19.008 CC test/thread/poller_perf/poller_perf.o 00:02:19.008 CC examples/ioat/perf/perf.o 00:02:19.008 CC test/app/jsoncat/jsoncat.o 00:02:19.008 CC examples/ioat/verify/verify.o 00:02:19.008 CC app/fio/nvme/fio_plugin.o 00:02:19.008 
CC test/dma/test_dma/test_dma.o 00:02:19.008 CC test/app/bdev_svc/bdev_svc.o 00:02:19.008 CC app/fio/bdev/fio_plugin.o 00:02:19.008 LINK spdk_lspci 00:02:19.008 CC test/env/mem_callbacks/mem_callbacks.o 00:02:19.008 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:19.272 LINK rpc_client_test 00:02:19.272 LINK spdk_trace_record 00:02:19.272 LINK spdk_nvme_discover 00:02:19.273 LINK zipf 00:02:19.273 CXX test/cpp_headers/crc64.o 00:02:19.273 LINK jsoncat 00:02:19.273 LINK env_dpdk_post_init 00:02:19.273 CXX test/cpp_headers/dif.o 00:02:19.273 LINK histogram_perf 00:02:19.273 LINK vtophys 00:02:19.273 CXX test/cpp_headers/dma.o 00:02:19.273 LINK interrupt_tgt 00:02:19.273 CXX test/cpp_headers/endian.o 00:02:19.273 LINK nvmf_tgt 00:02:19.273 CXX test/cpp_headers/env_dpdk.o 00:02:19.273 LINK poller_perf 00:02:19.273 CXX test/cpp_headers/env.o 00:02:19.273 CXX test/cpp_headers/event.o 00:02:19.273 CXX test/cpp_headers/fd_group.o 00:02:19.273 CXX test/cpp_headers/fd.o 00:02:19.273 CXX test/cpp_headers/file.o 00:02:19.273 CXX test/cpp_headers/ftl.o 00:02:19.273 CXX test/cpp_headers/gpt_spec.o 00:02:19.273 CXX test/cpp_headers/hexlify.o 00:02:19.273 LINK stub 00:02:19.273 CXX test/cpp_headers/histogram_data.o 00:02:19.273 CXX test/cpp_headers/idxd.o 00:02:19.273 CXX test/cpp_headers/idxd_spec.o 00:02:19.273 CXX test/cpp_headers/init.o 00:02:19.273 LINK iscsi_tgt 00:02:19.533 LINK spdk_tgt 00:02:19.533 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:19.533 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:19.533 LINK ioat_perf 00:02:19.533 LINK verify 00:02:19.533 CXX test/cpp_headers/ioat.o 00:02:19.533 LINK bdev_svc 00:02:19.533 LINK spdk_trace 00:02:19.533 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:19.533 CXX test/cpp_headers/ioat_spec.o 00:02:19.533 CXX test/cpp_headers/iscsi_spec.o 00:02:19.533 CXX test/cpp_headers/json.o 00:02:19.796 CXX test/cpp_headers/jsonrpc.o 00:02:19.796 LINK spdk_dd 00:02:19.796 CXX test/cpp_headers/keyring.o 00:02:19.796 CXX test/cpp_headers/keyring_module.o 00:02:19.796 CXX test/cpp_headers/likely.o 00:02:19.796 CXX test/cpp_headers/log.o 00:02:19.796 CXX test/cpp_headers/lvol.o 00:02:19.796 CXX test/cpp_headers/memory.o 00:02:19.796 CXX test/cpp_headers/mmio.o 00:02:19.796 CXX test/cpp_headers/nbd.o 00:02:19.796 CXX test/cpp_headers/net.o 00:02:19.796 CXX test/cpp_headers/notify.o 00:02:19.796 CXX test/cpp_headers/nvme.o 00:02:19.796 CXX test/cpp_headers/nvme_intel.o 00:02:19.796 CXX test/cpp_headers/nvme_ocssd.o 00:02:19.796 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:19.796 CXX test/cpp_headers/nvme_spec.o 00:02:19.796 CXX test/cpp_headers/nvme_zns.o 00:02:19.796 CXX test/cpp_headers/nvmf_cmd.o 00:02:19.796 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:19.796 CXX test/cpp_headers/nvmf.o 00:02:19.796 CXX test/cpp_headers/nvmf_spec.o 00:02:19.796 CXX test/cpp_headers/nvmf_transport.o 00:02:19.796 CXX test/cpp_headers/opal.o 00:02:19.796 LINK pci_ut 00:02:19.796 CXX test/cpp_headers/opal_spec.o 00:02:20.060 CXX test/cpp_headers/pci_ids.o 00:02:20.060 LINK test_dma 00:02:20.060 CC examples/sock/hello_world/hello_sock.o 00:02:20.060 CC examples/vmd/lsvmd/lsvmd.o 00:02:20.060 CC examples/vmd/led/led.o 00:02:20.060 CC examples/idxd/perf/perf.o 00:02:20.060 LINK spdk_nvme 00:02:20.060 CC examples/thread/thread/thread_ex.o 00:02:20.060 CXX test/cpp_headers/pipe.o 00:02:20.060 CC test/event/event_perf/event_perf.o 00:02:20.060 CXX test/cpp_headers/queue.o 00:02:20.060 CXX test/cpp_headers/reduce.o 00:02:20.060 LINK nvme_fuzz 00:02:20.060 CXX test/cpp_headers/rpc.o 00:02:20.060 
LINK spdk_bdev 00:02:20.060 CXX test/cpp_headers/scheduler.o 00:02:20.060 CC test/event/reactor/reactor.o 00:02:20.060 CXX test/cpp_headers/scsi.o 00:02:20.060 CXX test/cpp_headers/scsi_spec.o 00:02:20.060 CXX test/cpp_headers/sock.o 00:02:20.060 CXX test/cpp_headers/stdinc.o 00:02:20.060 CC test/event/reactor_perf/reactor_perf.o 00:02:20.319 CXX test/cpp_headers/string.o 00:02:20.319 CXX test/cpp_headers/thread.o 00:02:20.319 CC test/event/app_repeat/app_repeat.o 00:02:20.319 CXX test/cpp_headers/trace.o 00:02:20.319 CC app/vhost/vhost.o 00:02:20.319 CXX test/cpp_headers/trace_parser.o 00:02:20.319 CXX test/cpp_headers/tree.o 00:02:20.319 CXX test/cpp_headers/ublk.o 00:02:20.319 CXX test/cpp_headers/util.o 00:02:20.319 LINK mem_callbacks 00:02:20.319 LINK lsvmd 00:02:20.319 CXX test/cpp_headers/uuid.o 00:02:20.319 LINK spdk_nvme_identify 00:02:20.319 CC test/event/scheduler/scheduler.o 00:02:20.319 CXX test/cpp_headers/version.o 00:02:20.319 CXX test/cpp_headers/vfio_user_pci.o 00:02:20.319 CXX test/cpp_headers/vfio_user_spec.o 00:02:20.319 CXX test/cpp_headers/vhost.o 00:02:20.319 CXX test/cpp_headers/vmd.o 00:02:20.319 CXX test/cpp_headers/xor.o 00:02:20.319 CXX test/cpp_headers/zipf.o 00:02:20.319 LINK vhost_fuzz 00:02:20.319 LINK event_perf 00:02:20.319 LINK led 00:02:20.319 LINK spdk_top 00:02:20.319 LINK reactor 00:02:20.578 LINK hello_sock 00:02:20.578 LINK app_repeat 00:02:20.578 LINK reactor_perf 00:02:20.578 LINK thread 00:02:20.578 LINK vhost 00:02:20.578 LINK spdk_nvme_perf 00:02:20.836 CC test/nvme/err_injection/err_injection.o 00:02:20.836 CC test/nvme/startup/startup.o 00:02:20.836 CC test/nvme/fdp/fdp.o 00:02:20.836 CC test/nvme/reserve/reserve.o 00:02:20.836 CC test/nvme/compliance/nvme_compliance.o 00:02:20.836 CC test/nvme/fused_ordering/fused_ordering.o 00:02:20.836 CC test/nvme/reset/reset.o 00:02:20.836 CC test/nvme/sgl/sgl.o 00:02:20.836 CC test/nvme/simple_copy/simple_copy.o 00:02:20.836 CC test/nvme/boot_partition/boot_partition.o 00:02:20.836 CC test/nvme/overhead/overhead.o 00:02:20.836 CC test/nvme/cuse/cuse.o 00:02:20.836 CC test/nvme/e2edp/nvme_dp.o 00:02:20.836 CC test/nvme/aer/aer.o 00:02:20.836 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:20.836 LINK scheduler 00:02:20.836 CC test/nvme/connect_stress/connect_stress.o 00:02:20.836 LINK idxd_perf 00:02:20.836 CC test/blobfs/mkfs/mkfs.o 00:02:20.836 CC test/lvol/esnap/esnap.o 00:02:20.836 CC test/accel/dif/dif.o 00:02:21.096 LINK startup 00:02:21.096 CC examples/nvme/reconnect/reconnect.o 00:02:21.096 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:21.096 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:21.096 CC examples/nvme/hotplug/hotplug.o 00:02:21.096 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:21.096 CC examples/nvme/arbitration/arbitration.o 00:02:21.096 CC examples/nvme/hello_world/hello_world.o 00:02:21.096 CC examples/nvme/abort/abort.o 00:02:21.096 LINK reserve 00:02:21.096 LINK boot_partition 00:02:21.096 LINK fused_ordering 00:02:21.096 CC examples/accel/perf/accel_perf.o 00:02:21.096 LINK sgl 00:02:21.096 LINK reset 00:02:21.096 LINK doorbell_aers 00:02:21.096 LINK connect_stress 00:02:21.096 LINK err_injection 00:02:21.096 LINK simple_copy 00:02:21.096 CC examples/blob/hello_world/hello_blob.o 00:02:21.096 CC examples/blob/cli/blobcli.o 00:02:21.096 LINK mkfs 00:02:21.355 LINK aer 00:02:21.355 LINK nvme_dp 00:02:21.355 LINK overhead 00:02:21.355 LINK cmb_copy 00:02:21.355 LINK pmr_persistence 00:02:21.355 LINK fdp 00:02:21.355 LINK nvme_compliance 00:02:21.355 LINK hello_world 
00:02:21.355 LINK memory_ut 00:02:21.355 LINK reconnect 00:02:21.613 LINK dif 00:02:21.613 LINK hotplug 00:02:21.613 LINK arbitration 00:02:21.613 LINK abort 00:02:21.613 LINK hello_blob 00:02:21.872 LINK accel_perf 00:02:21.872 LINK nvme_manage 00:02:21.872 LINK blobcli 00:02:22.129 CC test/bdev/bdevio/bdevio.o 00:02:22.390 LINK iscsi_fuzz 00:02:22.390 LINK cuse 00:02:22.390 CC examples/bdev/hello_world/hello_bdev.o 00:02:22.390 CC examples/bdev/bdevperf/bdevperf.o 00:02:22.390 LINK bdevio 00:02:22.982 LINK hello_bdev 00:02:24.355 LINK bdevperf 00:02:24.922 CC examples/nvmf/nvmf/nvmf.o 00:02:25.490 LINK nvmf 00:02:30.759 LINK esnap 00:02:31.327 00:02:31.327 real 1m10.499s 00:02:31.327 user 12m1.841s 00:02:31.327 sys 2m50.143s 00:02:31.327 18:54:36 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:31.327 18:54:36 make -- common/autotest_common.sh@10 -- $ set +x 00:02:31.327 ************************************ 00:02:31.327 END TEST make 00:02:31.327 ************************************ 00:02:31.327 18:54:36 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:31.327 18:54:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:31.327 18:54:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:31.327 18:54:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.327 18:54:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:31.327 18:54:36 -- pm/common@44 -- $ pid=1431560 00:02:31.327 18:54:36 -- pm/common@50 -- $ kill -TERM 1431560 00:02:31.327 18:54:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.327 18:54:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:31.327 18:54:36 -- pm/common@44 -- $ pid=1431562 00:02:31.327 18:54:36 -- pm/common@50 -- $ kill -TERM 1431562 00:02:31.327 18:54:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.327 18:54:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:31.327 18:54:36 -- pm/common@44 -- $ pid=1431563 00:02:31.327 18:54:36 -- pm/common@50 -- $ kill -TERM 1431563 00:02:31.327 18:54:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.327 18:54:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:31.327 18:54:36 -- pm/common@44 -- $ pid=1431593 00:02:31.327 18:54:36 -- pm/common@50 -- $ sudo -E kill -TERM 1431593 00:02:31.327 18:54:36 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:31.327 18:54:36 -- nvmf/common.sh@7 -- # uname -s 00:02:31.327 18:54:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:31.327 18:54:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:31.327 18:54:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:31.327 18:54:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:31.327 18:54:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:31.327 18:54:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:31.327 18:54:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:31.327 18:54:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:31.327 18:54:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:31.327 18:54:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:31.327 18:54:36 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:02:31.327 18:54:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:02:31.327 18:54:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:31.327 18:54:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:31.327 18:54:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:31.327 18:54:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:31.327 18:54:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:31.327 18:54:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:31.327 18:54:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:31.327 18:54:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:31.327 18:54:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.327 18:54:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.328 18:54:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.328 18:54:36 -- paths/export.sh@5 -- # export PATH 00:02:31.328 18:54:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.328 18:54:36 -- nvmf/common.sh@47 -- # : 0 00:02:31.328 18:54:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:31.328 18:54:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:31.328 18:54:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:31.328 18:54:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:31.328 18:54:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:31.328 18:54:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:31.328 18:54:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:31.328 18:54:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:31.328 18:54:36 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:31.328 18:54:36 -- spdk/autotest.sh@32 -- # uname -s 00:02:31.328 18:54:36 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:31.328 18:54:36 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:31.328 18:54:36 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:31.328 18:54:36 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:31.328 18:54:36 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:31.328 18:54:36 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:31.328 18:54:36 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:31.328 18:54:36 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:31.328 18:54:36 -- spdk/autotest.sh@48 -- # udevadm_pid=1491087 00:02:31.328 18:54:36 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:31.328 18:54:36 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:31.328 18:54:36 -- pm/common@17 -- # local monitor 00:02:31.328 18:54:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.328 18:54:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.328 18:54:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.328 18:54:36 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.328 18:54:36 -- pm/common@21 -- # date +%s 00:02:31.328 18:54:36 -- pm/common@21 -- # date +%s 00:02:31.328 18:54:36 -- pm/common@25 -- # sleep 1 00:02:31.328 18:54:36 -- pm/common@21 -- # date +%s 00:02:31.328 18:54:36 -- pm/common@21 -- # date +%s 00:02:31.328 18:54:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721840076 00:02:31.328 18:54:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721840076 00:02:31.328 18:54:36 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721840076 00:02:31.328 18:54:36 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721840076 00:02:31.328 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721840076_collect-vmstat.pm.log 00:02:31.328 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721840076_collect-cpu-load.pm.log 00:02:31.328 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721840076_collect-cpu-temp.pm.log 00:02:31.587 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721840076_collect-bmc-pm.bmc.pm.log 00:02:32.525 18:54:37 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:32.525 18:54:37 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:32.525 18:54:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:32.525 18:54:37 -- common/autotest_common.sh@10 -- # set +x 00:02:32.525 18:54:37 -- spdk/autotest.sh@59 -- # create_test_list 00:02:32.525 18:54:37 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:32.525 18:54:37 -- common/autotest_common.sh@10 -- # set +x 00:02:32.525 18:54:38 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:32.525 18:54:38 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.525 18:54:38 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
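A note on the autotest.sh steps traced just above: the script saves the old kernel core_pattern (the systemd-coredump pipe), creates a coredumps directory under the output tree, and points core handling at its own scripts/core-collector.sh. The xtrace does not show the redirect targets, so the following is a hedged sketch of the usual shape of that swap, not the autotest.sh source; $rootdir and $output_dir are placeholders:
# Sketch: route kernel core dumps to a collector script for the run.
old_core_pattern=$(</proc/sys/kernel/core_pattern)   # assumed read; the trace only shows the saved value
mkdir -p "$output_dir/coredumps"
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern   # assumed target
trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT                  # assumed restore, not visible in this trace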
00:02:32.525 18:54:38 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:32.525 18:54:38 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.525 18:54:38 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:32.525 18:54:38 -- common/autotest_common.sh@1455 -- # uname 00:02:32.525 18:54:38 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:32.525 18:54:38 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:32.525 18:54:38 -- common/autotest_common.sh@1475 -- # uname 00:02:32.525 18:54:38 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:32.525 18:54:38 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:32.525 18:54:38 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:32.525 18:54:38 -- spdk/autotest.sh@72 -- # hash lcov 00:02:32.525 18:54:38 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:32.525 18:54:38 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:32.525 --rc lcov_branch_coverage=1 00:02:32.525 --rc lcov_function_coverage=1 00:02:32.525 --rc genhtml_branch_coverage=1 00:02:32.525 --rc genhtml_function_coverage=1 00:02:32.525 --rc genhtml_legend=1 00:02:32.525 --rc geninfo_all_blocks=1 00:02:32.525 ' 00:02:32.525 18:54:38 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:32.525 --rc lcov_branch_coverage=1 00:02:32.525 --rc lcov_function_coverage=1 00:02:32.525 --rc genhtml_branch_coverage=1 00:02:32.525 --rc genhtml_function_coverage=1 00:02:32.525 --rc genhtml_legend=1 00:02:32.525 --rc geninfo_all_blocks=1 00:02:32.525 ' 00:02:32.525 18:54:38 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:32.525 --rc lcov_branch_coverage=1 00:02:32.525 --rc lcov_function_coverage=1 00:02:32.525 --rc genhtml_branch_coverage=1 00:02:32.525 --rc genhtml_function_coverage=1 00:02:32.525 --rc genhtml_legend=1 00:02:32.525 --rc geninfo_all_blocks=1 00:02:32.525 --no-external' 00:02:32.525 18:54:38 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:32.525 --rc lcov_branch_coverage=1 00:02:32.525 --rc lcov_function_coverage=1 00:02:32.525 --rc genhtml_branch_coverage=1 00:02:32.525 --rc genhtml_function_coverage=1 00:02:32.525 --rc genhtml_legend=1 00:02:32.525 --rc geninfo_all_blocks=1 00:02:32.525 --no-external' 00:02:32.525 18:54:38 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:32.525 lcov: LCOV version 1.14 00:02:32.525 18:54:38 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:47.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:47.407 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:02.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:02.316 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:02.316 
00:03:02.316-00:03:02.319 geninfo: WARNING: GCOV did not produce any data for the remaining test/cpp_headers stubs; the identical 'no functions found' warning pair repeats for: assert, barrier, base64, bdev, accel_module, bit_array, bit_pool, blob_bdev, blobfs_bdev, blob, bdev_zone, bdev_module, blobfs, config, conf, cpuset, crc16, crc32, crc64, dif, dma, endian, env, env_dpdk, event, fd_group, fd, file, ftl, gpt_spec, hexlify, idxd, histogram_data, idxd_spec, init, ioat, ioat_spec, iscsi_spec, json, jsonrpc, keyring, keyring_module, likely, log, lvol, mmio, memory, nbd, net, notify, nvme_intel, nvme, nvme_ocssd, nvme_ocssd_spec, nvme_zns, nvme_spec, nvmf_cmd, nvmf_fc_spec, nvmf, nvmf_spec, nvmf_transport, opal, opal_spec, pci_ids, pipe, reduce, queue, rpc, scheduler, scsi, sock, scsi_spec, stdinc, string, thread, trace, trace_parser, ublk, tree, util, uuid and vfio_user_pci (each header stub compiles to an object with no functions, so the baseline capture records nothing for it)
00:03:02.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:02.319 geninfo: WARNING: GCOV did
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:02.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:02.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:02.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:02.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:02.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:02.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:02.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:02.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:02.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:02.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:07.587 18:55:12 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:07.587 18:55:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:07.587 18:55:12 -- common/autotest_common.sh@10 -- # set +x 00:03:07.587 18:55:12 -- spdk/autotest.sh@91 -- # rm -f 00:03:07.587 18:55:12 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.523 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:03:08.523 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:08.523 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:08.523 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:08.523 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:08.523 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:08.523 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:08.523 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:08.523 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:08.523 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:08.523 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:08.523 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:08.523 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:08.523 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:08.523 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:08.782 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:08.782 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:08.782 18:55:14 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:08.782 18:55:14 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:08.782 18:55:14 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:08.782 18:55:14 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:08.782 18:55:14 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:08.782 18:55:14 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:08.782 18:55:14 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 
00:03:08.782 18:55:14 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.782 18:55:14 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:08.782 18:55:14 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:08.782 18:55:14 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:08.782 18:55:14 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:08.782 18:55:14 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:08.782 18:55:14 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:08.782 18:55:14 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:08.782 No valid GPT data, bailing 00:03:08.782 18:55:14 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:08.782 18:55:14 -- scripts/common.sh@391 -- # pt= 00:03:08.782 18:55:14 -- scripts/common.sh@392 -- # return 1 00:03:08.782 18:55:14 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:08.782 1+0 records in 00:03:08.782 1+0 records out 00:03:08.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00347127 s, 302 MB/s 00:03:08.782 18:55:14 -- spdk/autotest.sh@118 -- # sync 00:03:08.782 18:55:14 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:08.782 18:55:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:08.782 18:55:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:11.316 18:55:16 -- spdk/autotest.sh@124 -- # uname -s 00:03:11.316 18:55:16 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:11.316 18:55:16 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:11.317 18:55:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:11.317 18:55:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:11.317 18:55:16 -- common/autotest_common.sh@10 -- # set +x 00:03:11.575 ************************************ 00:03:11.575 START TEST setup.sh 00:03:11.575 ************************************ 00:03:11.575 18:55:17 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:11.575 * Looking for test storage... 00:03:11.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:11.575 18:55:17 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:11.575 18:55:17 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:11.575 18:55:17 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:11.575 18:55:17 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:11.575 18:55:17 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:11.575 18:55:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:11.575 ************************************ 00:03:11.575 START TEST acl 00:03:11.575 ************************************ 00:03:11.575 18:55:17 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:11.575 * Looking for test storage... 
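Before the acl suite's output continues, a hedged recap of the pre-cleanup logic just traced: a namespace counts as zoned when /sys/block/<dev>/queue/zoned reports anything other than none, and a non-zoned namespace with no recognizable partition table gets its first MiB zeroed. A bash sketch under those assumptions (the device name is the one from this run; the blkid check stands in for the combined spdk-gpt.py/blkid probe, so this is a simplification, not scripts/common.sh itself):
# Sketch of the wipe decision traced above.
dev=nvme0n1
zoned=none
[[ -e /sys/block/$dev/queue/zoned ]] && zoned=$(</sys/block/$dev/queue/zoned)
if [[ $zoned == none ]] && ! blkid -s PTTYPE -o value "/dev/$dev" | grep -q .; then
    dd if=/dev/zero of="/dev/$dev" bs=1M count=1   # clear stale metadata from the namespace
    sync
fi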
00:03:11.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:11.575 18:55:17 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:11.575 18:55:17 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:11.575 18:55:17 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:11.575 18:55:17 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:11.575 18:55:17 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:11.575 18:55:17 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:11.575 18:55:17 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:11.575 18:55:17 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:11.575 18:55:17 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:11.576 18:55:17 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:11.576 18:55:17 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:11.576 18:55:17 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:11.576 18:55:17 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:11.576 18:55:17 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:11.576 18:55:17 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.576 18:55:17 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:13.479 18:55:19 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:13.479 18:55:19 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:13.479 18:55:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.479 18:55:19 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:13.479 18:55:19 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.479 18:55:19 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:15.384 Hugepages 00:03:15.384 node hugesize free / total 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.384 00:03:15.384 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.384 18:55:20 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 18:55:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 18:55:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 18:55:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ (the identical match/ioatdma-check/continue/read trace repeats for 0000:00:04.2 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.3) 00:03:15.385 18:55:20
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:15.385 18:55:20 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:15.385 18:55:20 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:15.385 18:55:20 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:15.385 18:55:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:15.385 ************************************ 00:03:15.385 START TEST denied 00:03:15.385 ************************************ 00:03:15.385 18:55:20 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:15.385 18:55:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:03:15.385 18:55:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:15.385 18:55:20 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:03:15.385 18:55:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.385 18:55:20 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.287 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:03:17.287 18:55:22 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:03:17.287 18:55:22 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:17.287 18:55:22 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:17.287 18:55:22 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:03:17.287 18:55:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:03:17.287 18:55:22 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:17.287 18:55:22 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:17.287 18:55:22 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:17.287 18:55:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:17.287 18:55:22 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.574 00:03:20.574 real 0m4.842s 00:03:20.574 user 0m1.411s 00:03:20.574 sys 0m2.474s 00:03:20.574 18:55:25 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:20.574 18:55:25 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:20.574 ************************************ 00:03:20.574 END TEST denied 00:03:20.574 ************************************ 00:03:20.574 18:55:25 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:20.574 18:55:25 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:20.574 18:55:25 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:20.574 18:55:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:20.574 ************************************ 00:03:20.574 START TEST allowed 00:03:20.574 ************************************ 00:03:20.574 18:55:25 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:20.574 18:55:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:03:20.574 18:55:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:20.574 18:55:25 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:03:20.574 18:55:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.574 18:55:25 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:23.123 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:23.123 18:55:28 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:23.123 18:55:28 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:23.123 18:55:28 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:23.123 18:55:28 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:23.123 18:55:28 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.028 00:03:25.028 real 0m4.752s 00:03:25.028 user 0m1.318s 00:03:25.028 sys 0m2.309s 00:03:25.028 18:55:30 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:25.028 18:55:30 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:25.028 ************************************ 00:03:25.028 END TEST allowed 00:03:25.028 ************************************ 00:03:25.028 00:03:25.028 real 0m13.430s 00:03:25.028 user 0m4.239s 00:03:25.028 sys 0m7.227s 00:03:25.028 18:55:30 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:25.028 18:55:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:25.028 ************************************ 00:03:25.028 END TEST acl 00:03:25.028 ************************************ 00:03:25.028 18:55:30 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:25.028 18:55:30 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:25.028 18:55:30 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:03:25.028 18:55:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:25.028 ************************************ 00:03:25.028 START TEST hugepages 00:03:25.028 ************************************ 00:03:25.028 18:55:30 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:25.028 * Looking for test storage... 00:03:25.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.028 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.029 18:55:30 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 27216848 kB' 'MemAvailable: 30794720 kB' 'Buffers: 2704 kB' 'Cached: 10187564 kB' 'SwapCached: 0 kB' 'Active: 7185228 kB' 'Inactive: 3506120 kB' 'Active(anon): 6791556 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504352 kB' 'Mapped: 199312 kB' 'Shmem: 6290476 kB' 'KReclaimable: 180236 kB' 'Slab: 528420 kB' 'SReclaimable: 180236 kB' 'SUnreclaim: 348184 kB' 'KernelStack: 12512 kB' 'PageTables: 8120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304780 kB' 'Committed_AS: 7931136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB' 00:03:25.029 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:25.029 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.029 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.029 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ (the identical compare/continue/read trace then repeats, at 00:03:25.029-00:03:25.030, for every /proc/meminfo key that is not Hugepagesize: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted and AnonHugePages)
00:03:25.030 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.030 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.030 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.030 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.289 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:25.290 18:55:30 
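The trace above exercises the two building blocks of the hugepages setup: a field lookup that scans /proc/meminfo line by line (splitting on ': ', continuing past every non-matching key, and echoing the value once the requested field matches; here Hugepagesize, 2048 kB), and clear_hp, which zeroes every huge page pool on every NUMA node. Below is a minimal bash sketch of both patterns. It is an illustration of the pattern only, not the SPDK setup/common.sh source, and the helper name get_meminfo_field is made up:

    # Sketch of the /proc/meminfo lookup traced above (hypothetical helper name).
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long run of "continue" lines above
            echo "$val"                        # e.g. 2048 for Hugepagesize
            return 0
        done < /proc/meminfo
        return 1
    }

    # Sketch of the clear_hp loop: reset every huge page pool on every node
    # (requires root; standard kernel sysfs layout assumed).
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done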
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:25.290 18:55:30 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:25.290 18:55:30 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:25.290 18:55:30 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:25.290 18:55:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:25.290 ************************************ 00:03:25.290 START TEST default_setup 00:03:25.290 ************************************ 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.290 18:55:30 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.190 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:27.190 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:27.190 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:27.190 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:27.190 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:27.190 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:27.190 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
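get_test_nr_hugepages, traced above, converts the requested pool size into a page count: 2097152 kB divided by the 2048 kB default huge page size gives nr_hugepages=1024, and default_setup assigns the whole pool to node 0 (nodes_test[0]=1024). A short sketch of that arithmetic, with illustrative variable names:

    size_kb=2097152           # requested pool size passed to get_test_nr_hugepages
    hugepage_kb=2048          # default_hugepages, i.e. Hugepagesize from /proc/meminfo
    nr_hugepages=$(( size_kb / hugepage_kb ))   # = 1024 pages
    nodes_test=([0]=$nr_hugepages)              # default_setup pins the pool to node 0
    echo "node 0 gets ${nodes_test[0]} huge pages"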
00:03:27.190 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:27.190 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:27.190 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:27.190 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:27.190 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:27.190 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:27.190 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:27.190 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:27.190 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:27.755 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29296192 kB' 'MemAvailable: 32874064 kB' 'Buffers: 2704 kB' 'Cached: 10187660 kB' 'SwapCached: 0 kB' 'Active: 7203508 kB' 'Inactive: 3506120 kB' 'Active(anon): 6809836 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522436 kB' 'Mapped: 198428 kB' 'Shmem: 6290572 kB' 'KReclaimable: 180236 kB' 'Slab: 528252 kB' 'SReclaimable: 180236 kB' 'SUnreclaim: 348016 kB' 'KernelStack: 12416 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7915676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB' 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.016 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.017 
18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.017 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue xtrace elided for the remaining non-matching keys: Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp (timestamps 00:03:28.017 through 00:03:28.018) ...]
00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- #
continue 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- 
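verify_nr_hugepages, running above, gathers its counters before checking the pool: anon from AnonHugePages (counted only when transparent hugepages are not set to [never], which is what the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test decided earlier; here the result was 0), then surp from HugePages_Surp (fetched next in the trace) and resv from HugePages_Rsvd. A self-contained sketch of that bookkeeping, using awk instead of the script's read loop; variable names are illustrative:

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP not fully disabled, so anonymous huge pages may exist; 0 kB on this host
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)  # surplus pages, 0 here
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)  # reserved pages, fetched next in the trace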
setup/common.sh@20 -- # local mem_f mem 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29296712 kB' 'MemAvailable: 32874584 kB' 'Buffers: 2704 kB' 'Cached: 10187664 kB' 'SwapCached: 0 kB' 'Active: 7202472 kB' 'Inactive: 3506120 kB' 'Active(anon): 6808800 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521484 kB' 'Mapped: 198364 kB' 'Shmem: 6290576 kB' 'KReclaimable: 180236 kB' 'Slab: 528240 kB' 'SReclaimable: 180236 kB' 'SUnreclaim: 348004 kB' 'KernelStack: 12496 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7915696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195600 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB' 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.018 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue xtrace elided for the remaining non-matching keys: SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal (timestamps 00:03:28.018 through 00:03:28.020) ...]
00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.020 18:55:33
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
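The trace above is bash xtrace output from SPDK's setup/common.sh get_meminfo helper: it snapshots a meminfo file with mapfile, strips any leading "Node N " prefix, then walks the lines with IFS=': ' read, skipping every key with continue until the requested one matches, at which point the value is echoed and the function returns. A minimal standalone sketch of that lookup, assuming only stock bash with extglob; the body below is an illustration of the traced technique, not the SPDK source itself:

#!/usr/bin/env bash
# Sketch of the traced lookup: read one key out of /proc/meminfo (or a
# per-NUMA-node meminfo file) by field-splitting each line on ': '.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem line
    # With a node index, prefer that node's own meminfo file, exactly as
    # the [[ -e /sys/devices/system/node/node$node/meminfo ]] test does.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines carry a "Node N " prefix; strip it like common.sh@29.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the continue storm in the trace
        echo "$val"                        # result travels over stdout
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp    # -> 0 on the node above
get_meminfo HugePages_Total   # -> 1024

Because every non-matching key produces an IFS/read/compare/continue quartet under xtrace, a single lookup over a ~55-key meminfo file accounts for the long runs of near-identical records condensed above and below.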
00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:28.020 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29297408 kB' 'MemAvailable: 32875280 kB' 'Buffers: 2704 kB' 'Cached: 10187680 kB' 'SwapCached: 0 kB' 'Active: 7202464 kB' 'Inactive: 3506120 kB' 'Active(anon): 6808792 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521508 kB' 'Mapped: 198364 kB' 'Shmem: 6290592 kB' 'KReclaimable: 180236 kB' 'Slab: 528240 kB' 'SReclaimable: 180236 kB' 'SUnreclaim: 348004 kB' 'KernelStack: 12416 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7915716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195600 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
00:03:28.021 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:28.021 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the setup/common.sh@31-32 read/compare/continue cycle repeats for every non-matching key from MemFree through HugePages_Free (all listed in the snapshot above) ...]
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:28.283 nr_hugepages=1024
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:28.283 resv_hugepages=0
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:28.283 surplus_hugepages=0
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:28.283 anon_hugepages=0
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
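The @107-@110 arithmetic tests are the consistency check this stage is really after: the kernel's HugePages_Total must equal the nr_hugepages the script requested plus any surplus and reserved pages it just read back (1024 == 1024 + 0 + 0 here). The same check in isolation, as a sketch; the values are taken from the log above, and the awk extraction is an assumption for self-containment, not how hugepages.sh itself reads them:

#!/usr/bin/env bash
# Verify hugepage pool accounting: total == requested + surplus + reserved.
nr_hugepages=1024   # requested pool size, echoed in the log above
surp=0              # HugePages_Surp read back via get_meminfo
resv=0              # HugePages_Rsvd read back via get_meminfo

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: total=$total, expected $((nr_hugepages + surp + resv))" >&2
    exit 1
fi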
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:28.283 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29297408 kB' 'MemAvailable: 32875280 kB' 'Buffers: 2704 kB' 'Cached: 10187720 kB' 'SwapCached: 0 kB' 'Active: 7202148 kB' 'Inactive: 3506120 kB' 'Active(anon): 6808476 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521104 kB' 'Mapped: 198364 kB' 'Shmem: 6290632 kB' 'KReclaimable: 180236 kB' 'Slab: 528240 kB' 'SReclaimable: 180236 kB' 'SUnreclaim: 348004 kB' 'KernelStack: 12400 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7915740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195600 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
00:03:28.284 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:28.284 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the setup/common.sh@31-32 read/compare/continue cycle repeats for every non-matching key from MemFree through Unaccepted (all listed in the snapshot above) ...]
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
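Note how the value crosses the function boundary at setup/common.sh@33: get_meminfo prints 1024 and returns 0, and by the time xtrace logs the caller's (( 1024 == nr_hugepages + surp + resv )) the command substitution has already been expanded to the literal value. A tiny sketch of that stdout-return idiom; pool_size is a hypothetical helper, not an SPDK function:

#!/usr/bin/env bash
# Bash functions "return" data over stdout; the numeric exit status only
# signals success. Hence the 'echo 1024' / 'return 0' pair in the trace.
pool_size() {
    echo 1024
    return 0
}

nr_hugepages=1024
# $(pool_size) expands before the arithmetic test runs, so xtrace prints
# the test with the value already substituted, exactly as in the log.
if (( $(pool_size) == nr_hugepages )); then
    echo "pool matches requested size"
fi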
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12995212 kB' 'MemUsed: 11624200 kB' 'SwapCached: 0 kB' 'Active: 5369324 kB' 'Inactive: 3329772 kB' 'Active(anon): 5110692 kB' 'Inactive(anon): 0 kB' 'Active(file): 258632 kB' 'Inactive(file): 3329772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8401480 kB' 'Mapped: 85012 kB' 'AnonPages: 300728 kB' 'Shmem: 4813076 kB' 'KernelStack: 7688 kB' 'PageTables: 4400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110628 kB' 'Slab: 280304 kB' 'SReclaimable: 110628 kB' 'SUnreclaim: 169676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.285 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the setup/common.sh@31-32 read/compare/continue cycle repeats for the remaining non-matching node0 keys; the excerpt breaks off mid-scan ...]
00:03:28.287 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.287 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:28.287 18:55:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:28.287 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:28.287 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:28.287 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:28.287 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:28.287 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:28.287 node0=1024 expecting 1024
00:03:28.287 18:55:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:28.287
00:03:28.287 real 0m3.046s
00:03:28.287 user 0m0.938s
00:03:28.287 sys 0m1.239s
00:03:28.287 18:55:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:28.287 18:55:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:28.287 ************************************
00:03:28.287 END TEST default_setup
00:03:28.287 ************************************
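The trace above is dominated by one idiom: setup/common.sh's get_meminfo snapshots a meminfo file with mapfile, then scans it with IFS=': ' read -r var val _, echoing the value of the single requested key and skipping every other key with continue. A minimal sketch of that pattern, reconstructed from the traced statements rather than quoted from SPDK's setup/common.sh:

    #!/usr/bin/env bash
    # Hedged sketch of the lookup idiom traced above; a reconstruction,
    # not SPDK's verbatim helper.
    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # A node id switches the source to that node's sysfs meminfo,
        # whose lines carry a "Node <id> " prefix that must be stripped.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # "HugePages_Surp:    0" splits into var=HugePages_Surp, val=0
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp      # prints 0 on the host traced above
    get_meminfo HugePages_Surp 0    # same key, read from node 0's meminfo

Scanning a captured array rather than the live file keeps one get_meminfo call self-consistent even while counters move underneath it.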
00:03:28.287 18:55:33 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:28.287 18:55:33 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:28.287 18:55:33 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:28.287 18:55:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:28.287 ************************************
00:03:28.287 START TEST per_node_1G_alloc
00:03:28.287 ************************************
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:28.287 18:55:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:30.191 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:30.191 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:30.191 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:30.191 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:30.191 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:30.191 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:30.191 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:30.191 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:30.191 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:30.191 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:30.191 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:30.191 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:30.191 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:30.191 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:30.191 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:30.191 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:30.191 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
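To make the arithmetic above explicit: get_test_nr_hugepages turns the requested 1048576 kB into 1048576 / 2048 = 512 default-size (2048 kB) pages for each of nodes 0 and 1, and scripts/setup.sh is then invoked with NRHUGE=512 HUGENODE=0,1, leaving HugePages_Total at 2 * 512 = 1024. A hedged sketch of the equivalent manual allocation; the sysfs writes below are a generic illustration of per-node reservation, not a quote of setup.sh:

    # size_kb, hp_kb and nr are illustrative names, not SPDK's.
    size_kb=1048576                                            # 1 GiB requested per node
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
    nr=$(( size_kb / hp_kb ))                                  # 1048576 / 2048 = 512
    for node in 0 1; do
        echo "$nr" | sudo tee \
            "/sys/devices/system/node/node$node/hugepages/hugepages-${hp_kb}kB/nr_hugepages"
    done
    # Equivalent to the invocation seen in the log:
    #   NRHUGE=512 HUGENODE=0,1 scripts/setup.sh

Writing the per-node sysfs counter instead of the global /proc/sys/vm/nr_hugepages pins the reservation to a specific NUMA node, which is the whole point of this test.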
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.191 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29297092 kB' 'MemAvailable: 32874964 kB' 'Buffers: 2704 kB' 'Cached: 10187780 kB' 'SwapCached: 0 kB' 'Active: 7206416 kB' 'Inactive: 3506120 kB' 'Active(anon): 6812744 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525264 kB' 'Mapped: 198944 kB' 'Shmem: 6290692 kB' 'KReclaimable: 180236 kB' 'Slab: 528104 kB' 'SReclaimable: 180236 kB' 'SUnreclaim: 347868 kB' 'KernelStack: 12432 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7920180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
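The guard at hugepages.sh@96 relies on the kernel bracketing the active policy inside /sys/kernel/mm/transparent_hugepage/enabled (here "always [madvise] never"), so a substring test for "[never]" decides whether AnonHugePages is worth sampling at all. A sketch under that assumption, reusing the get_meminfo helper sketched earlier:

    # Standard THP sysfs knob; the active mode is the bracketed word.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP is at least madvise-enabled, so anonymous hugepage usage
        # can be nonzero and must be subtracted from the expectation.
        anon=$(get_meminfo AnonHugePages)
    fi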
00:03:30.191-193 [xtrace condensed: setup/common.sh@31-32 get_meminfo loop -- every /proc/meminfo key from MemTotal through HardwareCorrupted is compared against AnonHugePages and skipped with continue]
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.193 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29301920 kB' 'MemAvailable: 32879792 kB' 'Buffers: 2704 kB' 'Cached: 10187784 kB' 'SwapCached: 0 kB' 'Active: 7208568 kB' 'Inactive: 3506120 kB' 'Active(anon): 6814896 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527368 kB' 'Mapped: 199272 kB' 'Shmem: 6290696 kB' 'KReclaimable: 180236 kB' 'Slab: 528080 kB' 'SReclaimable: 180236 kB' 'SUnreclaim: 347844 kB' 'KernelStack: 12432 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7922192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195652 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
00:03:30.193-195 [xtrace condensed: setup/common.sh@31-32 get_meminfo loop -- every /proc/meminfo key from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped with continue]
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
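With anon and surp both measured as 0, and the snapshot already showing HugePages_Rsvd: 0, the per-node half of verify_nr_hugepages reduces to reading each node's sysfs counter and comparing it with the 512-per-node request made earlier. A hedged sketch of that check, using the standard sysfs layout rather than quoting hugepages.sh:

    # Read the 2048 kB hugepage count reserved on every NUMA node.
    for d in /sys/devices/system/node/node[0-9]*; do
        n=$(<"$d/hugepages/hugepages-2048kB/nr_hugepages")
        echo "${d##*/}=$n"    # with the request above: node0=512, node1=512
    done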
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.195 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29302792 kB' 'MemAvailable: 32880664 kB' 'Buffers: 2704 kB' 'Cached: 10187784 kB' 'SwapCached: 0 kB' 'Active: 7205164 kB' 'Inactive: 3506120 kB' 'Active(anon): 6811492 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524012 kB' 'Mapped: 198812 kB' 'Shmem: 6290696 kB' 'KReclaimable: 180236 kB' 'Slab: 528080 kB' 'SReclaimable: 180236 kB' 'SUnreclaim: 347844 kB' 'KernelStack: 12448 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7921664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
[xtrace condensed: setup/common.sh@31-32 tests every field of the dump above against HugePages_Rsvd, from MemTotal through HugePages_Free, continuing past each non-match]
00:03:30.197 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.197 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.197 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:30.198 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
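For readers following the trace: a minimal bash sketch of the parser the lines above are stepping through, reconstructed from the xtrace line references (setup/common.sh@17-33); names and structure are inferred, not the verbatim SPDK source.

    # Reconstructed sketch (assumption: simplified from the common.sh xtrace above).
    shopt -s extglob                        # supplies the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}            # e.g. get_meminfo HugePages_Rsvd
        local var val _
        local mem_f=/proc/meminfo mem line
        # A node argument switches to that node's sysfs copy of meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # sysfs lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # the long [[ ... ]]/continue runs above
            echo "$val"                       # kB figure, or a page count for HugePages_*
            return 0
        done
        return 1
    }

Called with one argument it would print 0 for HugePages_Rsvd on this box, matching the resv=0 just assigned; called with a node number (as at hugepages.sh@117 further down) it reads that node's sysfs file instead.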
00:03:30.198 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:30.198 nr_hugepages=1024
00:03:30.198 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:30.198 resv_hugepages=0
00:03:30.198 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:30.198 surplus_hugepages=0
00:03:30.198 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:30.198 anon_hugepages=0
00:03:30.198 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:30.198 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:30.198 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
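The two arithmetic guards above, restated with this run's echoed values (a plain-bash illustration, not SPDK code): the configured total must equal requested pages plus surplus plus reserved before the per-node checks start.

    nr_hugepages=1024; surp=0; resv=0
    (( 1024 == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0 -> true
    (( 1024 == nr_hugepages ))                 # true, so the test proceeds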
[xtrace condensed: the same setup/common.sh@17-31 get_meminfo prologue as above, now with get=HugePages_Total and node unset, reading /proc/meminfo again]
00:03:30.198 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29296820 kB' 'MemAvailable: 32874692 kB' 'Buffers: 2704 kB' 'Cached: 10187824 kB' 'SwapCached: 0 kB' 'Active: 7209536 kB' 'Inactive: 3506120 kB' 'Active(anon): 6815864 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528340 kB' 'Mapped: 199280 kB' 'Shmem: 6290736 kB' 'KReclaimable: 180236 kB' 'Slab: 528072 kB' 'SReclaimable: 180236 kB' 'SUnreclaim: 347836 kB' 'KernelStack: 12704 kB' 'PageTables: 9164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7924596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195812 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
[xtrace condensed: setup/common.sh@31-32 tests every field from MemTotal through Unaccepted against HugePages_Total and skips each via continue]
00:03:30.462 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:30.462 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:30.462 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:30.462 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:30.462 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:30.462 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:30.462 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.462 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:30.462 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
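A sketch of the node enumeration just traced (simplified from the hugepages.sh xtrace; extglob again supplies the +([0-9]) glob): each NUMA node directory under /sys/devices/system/node yields one slot in nodes_sys, and this run expects 512 pages per node.

    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512      # strip everything up to "node": /sys/.../node0 -> index 0
    done
    no_nodes=${#nodes_sys[@]}              # 2 on this machine, per the trace
    (( no_nodes > 0 ))                     # guard: at least one node must exist

With 1024 total pages and two nodes, the 512-per-node split is what the per-node get_meminfo calls below go on to verify against the kernel's own counters.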
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.463 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 14035336 kB' 'MemUsed: 10584076 kB' 'SwapCached: 0 kB' 'Active: 5370820 kB' 'Inactive: 3329772 kB' 'Active(anon): 5112188 kB' 'Inactive(anon): 0 kB' 'Active(file): 258632 kB' 'Inactive(file): 3329772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8401488 kB' 'Mapped: 85012 kB' 'AnonPages: 302228 kB' 'Shmem: 4813084 kB' 'KernelStack: 8088 kB' 'PageTables: 5348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110628 kB' 'Slab: 280280 kB' 'SReclaimable: 110628 kB' 'SUnreclaim: 169652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
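Note that this per-node call switched mem_f to node0's sysfs file, whose raw lines look like "Node 0 HugePages_Surp: 0". The prefix strip at common.sh@29, shown on one such line as a standalone illustration:

    shopt -s extglob
    line='Node 0 HugePages_Surp: 0'
    echo "${line#Node +([0-9]) }"    # -> HugePages_Surp: 0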
[xtrace condensed: setup/common.sh@31-32 tests each field of the node0 dump above against HugePages_Surp (MemTotal, MemFree, MemUsed, SwapCached, Active/Inactive breakdowns, FilePages, slab and page-table counters, the hugepage fields, through Unaccepted), continuing past every non-match]
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.464 18:55:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 15278696 kB' 'MemUsed: 4128548 kB' 'SwapCached: 0 kB' 'Active: 1834440 kB' 'Inactive: 176348 kB' 'Active(anon): 1699400 kB' 'Inactive(anon): 0 kB' 'Active(file): 135040 kB' 'Inactive(file): 176348 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1789084 kB' 'Mapped: 113364 kB' 'AnonPages: 221876 kB' 'Shmem: 1477696 kB' 'KernelStack: 4760 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69608 kB' 'Slab: 247796 kB' 'SReclaimable: 69608 kB' 'SUnreclaim: 178188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:30.466 18:55:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.466 18:55:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:30.466 18:55:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:30.466 18:55:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:30.466 18:55:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:30.466 18:55:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:30.466 18:55:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:30.466 18:55:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:30.466 node0=512 expecting 512
00:03:30.466 18:55:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:30.466 18:55:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:30.466 18:55:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:30.466 18:55:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:30.466 node1=512 expecting 512
00:03:30.466 18:55:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:30.466 
00:03:30.466 real 0m2.113s
00:03:30.466 user 0m0.915s
00:03:30.466 sys 0m1.181s
00:03:30.466 18:55:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:30.466 18:55:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:30.466 ************************************
00:03:30.466 END TEST per_node_1G_alloc
00:03:30.466 ************************************
00:03:30.466 18:55:36 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:30.466 18:55:36 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:30.466 18:55:36 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:30.466 18:55:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
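The per_node_1G_alloc test above repeats one lookup per NUMA node: setup/common.sh picks the node's meminfo file, strips the "Node <N> " prefix, and walks the fields until the requested key matches, echoing its value. A minimal bash sketch of that pattern (a paraphrase for orientation, not the verbatim SPDK helper; names follow the xtrace):

    #!/usr/bin/env bash
    shopt -s extglob   # for the +([0-9]) prefix-strip pattern below

    # Sketch: fetch one meminfo field, optionally from a NUMA node's file.
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # This read/compare pair is what emits the long runs of
            # "[[ <field> == ... ]] / continue" entries in the log above.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp 1   # node-1 surplus; the scans above printed 0

Each scan ends, as logged, with the matched field's value echoed and return 0; a surplus of 0 means neither node over-allocated its 512 reserved pages.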
00:03:30.466 ************************************
00:03:30.466 START TEST even_2G_alloc
00:03:30.466 ************************************
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:30.466 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:30.467 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:30.467 18:55:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
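The get_test_nr_hugepages_per_node walk just logged (hugepages.sh@81-@84) deals the 1024-page total out evenly, writing 512 into each node's slot from the highest index down. A compact sketch of that split (illustrative variable names, not the exact hugepages.sh code):

    #!/usr/bin/env bash
    # Even split: nr_hugepages over no_nodes NUMA nodes, assigned from the
    # last node index down to node 0, mirroring the loop traced above.
    nr_hugepages=1024
    no_nodes=2
    declare -a nodes_test

    per_node=$((nr_hugepages / no_nodes))
    for ((node = no_nodes - 1; node >= 0; node--)); do
        nodes_test[node]=$per_node
    done

    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512

With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes exported, scripts/setup.sh then reserves those pages before the verification pass that follows.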
00:03:31.842 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:31.842 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:31.842 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:31.842 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:31.842 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:31.842 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:31.842 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:31.842 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:31.842 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:31.842 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:31.842 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:31.842 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:31.842 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:31.842 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:31.842 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:31.842 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:31.842 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.104 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29298900 kB' 'MemAvailable: 32876756 kB' 'Buffers: 2704 kB' 'Cached: 10187912 kB' 'SwapCached: 0 kB' 'Active: 7200276 kB' 'Inactive: 3506120 kB' 'Active(anon): 6806604 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518960 kB' 'Mapped: 197496 kB' 'Shmem: 6290824 kB' 'KReclaimable: 180204 kB' 'Slab: 527900 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347696 kB' 'KernelStack: 12400 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7903932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
00:03:32.106 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:32.106 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.106 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:32.106 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
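verify_nr_hugepages reaches anon=0 here because the THP guard at hugepages.sh@96 passed (the mode string was `always [madvise] never`, so THP is not pinned to [never]) and the AnonHugePages lookup returned 0 kB. Roughly, that guard looks like the following sketch, reusing the get_meminfo paraphrase shown earlier (the sysfs path is the standard kernel location; this is not the literal script):

    # If THP is not disabled outright, anonymous huge pages could inflate
    # the counters, so sample AnonHugePages (reported in kB) before judging.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    anon=0
    if [[ $thp != *'[never]'* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 in the log above
    fi
    echo "anon=$anon"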
00:03:32.106 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:32.106 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:32.106 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:32.106 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:32.106 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.106 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.106 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.106 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.106 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.106 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.106 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29299192 kB' 'MemAvailable: 32877048 kB' 'Buffers: 2704 kB' 'Cached: 10187916 kB' 'SwapCached: 0 kB' 'Active: 7199912 kB' 'Inactive: 3506120 kB' 'Active(anon): 6806240 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518608 kB' 'Mapped: 197440 kB' 'Shmem: 6290828 kB' 'KReclaimable: 180204 kB' 'Slab: 527888 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347684 kB' 'KernelStack: 12384 kB' 'PageTables: 7728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7903952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.108 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- 
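The xtrace above is setup/common.sh's get_meminfo helper reading /proc/meminfo into an array and scanning it key by key until it finds the requested field (here HugePages_Surp, which is 0). A minimal sketch of that pattern, reconstructed from the trace; the real helper may differ in details:

  #!/usr/bin/env bash
  # Sketch of the get_meminfo pattern seen in the xtrace above
  # (reconstructed from the trace, not copied from setup/common.sh).
  shopt -s extglob   # for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=${2:-}
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # Per-node query: use the node's own meminfo file if it exists.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # node*/meminfo lines carry a "Node N " prefix; strip it.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          # Scan key by key, as in the trace, until the requested one.
          if [[ $var == "$get" ]]; then
              echo "$val"   # value only, e.g. "1024" or "29299192"
              return 0
          fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo HugePages_Surp   # prints 0 on this box

Called with a node argument, the same function reads /sys/devices/system/node/nodeN/meminfo instead, which is how the per-node HugePages_Surp query near the end of this section works.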
00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:32.109 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29299892 kB' 'MemAvailable: 32877748 kB' 'Buffers: 2704 kB' 'Cached: 10187932 kB' 'SwapCached: 0 kB' 'Active: 7199836 kB' 'Inactive: 3506120 kB' 'Active(anon): 6806164 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518484 kB' 'Mapped: 197364 kB' 'Shmem: 6290844 kB' 'KReclaimable: 180204 kB' 'Slab: 527868 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347664 kB' 'KernelStack: 12416 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7903972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
... 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / -- # continue (repeated for each /proc/meminfo key above, MemTotal through HugePages_Free; none match)
00:03:32.111 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:32.111 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.111 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:32.111 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:32.111 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:32.111 nr_hugepages=1024
00:03:32.111 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:32.111 resv_hugepages=0
00:03:32.111 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:32.111 surplus_hugepages=0
00:03:32.111 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:32.111 anon_hugepages=0
00:03:32.111 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:32.111 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
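At hugepages.sh@107 and @109 the test asserts that the pool it configured is self-consistent: the total page count must equal the requested count plus surplus plus reserved pages. With the values just read this is 1024 == 1024 + 0 + 0. A standalone version of that check, with this run's values hard-coded for illustration:

  nr_hugepages=1024   # requested by the even_2G_alloc test
  surp=0              # HugePages_Surp from /proc/meminfo
  resv=0              # HugePages_Rsvd from /proc/meminfo
  total=1024          # HugePages_Total from /proc/meminfo

  # Same consistency check the trace performs:
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2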
kB' 'MemAvailable: 32878064 kB' 'Buffers: 2704 kB' 'Cached: 10187956 kB' 'SwapCached: 0 kB' 'Active: 7199868 kB' 'Inactive: 3506120 kB' 'Active(anon): 6806196 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518488 kB' 'Mapped: 197364 kB' 'Shmem: 6290868 kB' 'KReclaimable: 180204 kB' 'Slab: 527868 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347664 kB' 'KernelStack: 12416 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7903996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB' 00:03:32.112 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.112 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.112 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.112 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.112 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.112 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.112 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.112 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.112 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.112 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.112 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.112 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.112 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.112 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.112 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.373 18:55:37 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.373 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:32.374 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace condensed: the same IFS=': ' / read -r var val _ / [[ $var == HugePages_Total ]] / continue cycle repeats for every remaining /proc/meminfo key (KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) — none match ...]
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
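Annotation: the compare/continue churn condensed above is SPDK's get_meminfo helper (test/setup/common.sh) walking a meminfo file key by key until it hits the requested counter. A minimal standalone sketch of that pattern — the function name is ours, and the sed call stands in for the extglob prefix-stripping the real script does with mapfile:

    #!/usr/bin/env bash
    # Sketch: fetch one key from /proc/meminfo or a per-NUMA-node meminfo file.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # per-node counters live under /sys; otherwise use the global file
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # every non-matching key is skipped
            echo "$val"                       # numeric value; any "kB" unit lands in $_
            return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")  # per-node lines carry a "Node N " prefix
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Surp 0   -> prints node0's surplus page count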
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:32.375 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 14040708 kB' 'MemUsed: 10578704 kB' 'SwapCached: 0 kB' 'Active: 5367272 kB' 'Inactive: 3329772 kB' 'Active(anon): 5108640 kB' 'Inactive(anon): 0 kB' 'Active(file): 258632 kB' 'Inactive(file): 3329772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8401496 kB' 'Mapped: 84292 kB' 'AnonPages: 298620 kB' 'Shmem: 4813092 kB' 'KernelStack: 7736 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110628 kB' 'Slab: 280180 kB' 'SReclaimable: 110628 kB' 'SUnreclaim: 169552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace condensed: the per-key compare/continue cycle runs over the node0 snapshot (MemTotal through HugePages_Free) — only HugePages_Surp matches ...]
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:32.377 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 15259392 kB' 'MemUsed: 4147852 kB' 'SwapCached: 0 kB' 'Active: 1832600 kB' 'Inactive: 176348 kB' 'Active(anon): 1697560 kB' 'Inactive(anon): 0 kB' 'Active(file): 135040 kB' 'Inactive(file): 176348 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1789220 kB' 'Mapped: 113072 kB' 'AnonPages: 219836 kB' 'Shmem: 1477832 kB' 'KernelStack: 4680 kB' 'PageTables: 3568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69576 kB' 'Slab: 247688 kB' 'SReclaimable: 69576 kB' 'SUnreclaim: 178112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace condensed: the per-key compare/continue cycle runs over the node1 snapshot — again only HugePages_Surp matches ...]
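Annotation: the elided scans feed hugepages.sh@115-117, which folds reserved and surplus pages into each node's tally before the node0=/node1= comparison just below. A condensed sketch of that bookkeeping, reusing get_meminfo_sketch from the earlier annotation (helper name ours; the 512-per-node target is this run's configuration, not a constant):

    # Sketch: check that every NUMA node ended up with its expected share.
    verify_nodes_sketch() {
        local expected=$1 node_dir node got surp
        for node_dir in /sys/devices/system/node/node[0-9]*; do
            node=${node_dir##*node}
            got=$(get_meminfo_sketch HugePages_Total "$node")
            surp=$(get_meminfo_sketch HugePages_Surp "$node")
            # surplus pages are transient overcommit, not part of the test pool
            echo "node$node=$((got - surp)) expecting $expected"
            (( got - surp == expected )) || return 1
        done
    }
    # this run: verify_nodes_sketch 512  -> node0=512 expecting 512, node1=512 expecting 512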
00:03:32.378 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:32.378 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:32.378 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:32.378 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:32.378 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:32.378 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:32.378 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:32.378 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:32.378 node0=512 expecting 512
00:03:32.378 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:32.378 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:32.378 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:32.378 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:32.378 node1=512 expecting 512
00:03:32.378 18:55:37 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:32.378
00:03:32.378 real	0m1.858s
00:03:32.378 user	0m0.785s
00:03:32.378 sys	0m1.039s
00:03:32.378 18:55:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:32.378 18:55:37 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:32.378 ************************************
00:03:32.378 END TEST even_2G_alloc
00:03:32.378 ************************************
00:03:32.378 18:55:37 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:32.379 18:55:37 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:32.379 18:55:37 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:32.379 18:55:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:32.379 ************************************
00:03:32.379 START TEST odd_alloc
00:03:32.379 ************************************
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
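Annotation: get_test_nr_hugepages was just called with size=2098176 kB, i.e. HUGEMEM=2049 MB; at the default 2048 kB hugepage size that gives the odd count of 1025 pages, which the next entries split 513/512 across the two nodes. The arithmetic as a sketch (the ceiling rounding is our reading of the traced result, not a quote of hugepages.sh):

    size_kb=2098176                                  # 2049 MB, from HUGEMEM=2049
    hp_kb=2048                                       # default 2 MB hugepage size
    nr_hugepages=$(((size_kb + hp_kb - 1) / hp_kb))  # ceiling division -> 1025
    half=$((nr_hugepages / 2))                       # 512
    echo "total=$nr_hugepages split=$((nr_hugepages - half))/$half"   # 513/512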
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:32.379 18:55:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:34.286 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:34.286 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:34.286 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:34.287 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:34.287 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:34.287 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:34.287 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:34.287 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:34.287 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:34.287 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:34.287 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:34.287 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:34.287 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:34.287 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:34.287 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:34.287 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:34.287 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29303688 kB' 'MemAvailable: 32881544 kB' 'Buffers: 2704 kB' 'Cached: 10188048 kB' 'SwapCached: 0 kB' 'Active: 7200132 kB' 'Inactive: 3506120 kB' 'Active(anon): 6806460 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518668 kB' 'Mapped: 197416 kB' 'Shmem: 6290960 kB' 'KReclaimable: 180204 kB' 'Slab: 527892 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347688 kB' 'KernelStack: 12416 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7904068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
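Annotation: before counting hugepages, verify_nr_hugepages (hugepages.sh@96 above) decides whether anonymous THP could skew AnonHugePages by looking at which mode is bracketed in the THP sysfs knob. A standalone equivalent of that check (the echo messages are ours):

    # /sys/kernel/mm/transparent_hugepage/enabled reads like "always [madvise] never";
    # the bracketed word is the active mode
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        echo "anon THP enabled ($(grep -o '\[[a-z]*\]' <<<"$thp")); AnonHugePages may be nonzero"
    else
        echo "anon THP disabled; expecting AnonHugePages == 0"
    fi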
'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB' 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- 
00:03:34.287 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 [trace condensed: the get_meminfo scan for AnonHugePages steps through the remaining /proc/meminfo keys -- Inactive(anon) through HardwareCorrupted -- and every non-matching key takes the "continue" branch at setup/common.sh@32]
00:03:34.288 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:34.288 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.288 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:34.288 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
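For readers following the trace: the long scan above is SPDK's get_meminfo helper from setup/common.sh walking /proc/meminfo one "Key: value" pair at a time. Below is a minimal standalone sketch of that lookup pattern, simplified to the global-/proc/meminfo case this run exercises; the for-loop/here-string form and the return-1 fallback are our simplifications, not the verbatim script source.

    # Minimal sketch (assumptions noted inline): look up one field of
    # /proc/meminfo the way the trace does -- split each line on ': '
    # into key and value, "continue" past non-matching keys, and echo
    # the value on the first match.
    get_meminfo() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo          # snapshot, one "Key: value" per element
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # the long runs of "continue" above
            echo "$val"                         # e.g. 0 for AnonHugePages on this box
            return 0
        done
        return 1                                # key absent (assumed fallback)
    }

    anon=$(get_meminfo AnonHugePages)           # -> anon=0, matching hugepages.sh@97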
00:03:34.288 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:34.288 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-29 [trace condensed: local get=HugePages_Surp, node unset, mem_f=/proc/meminfo (the per-node probe at @23 finds no /sys/devices/system/node/node/meminfo), mapfile -t mem, "Node N" prefixes stripped at @29, IFS=': ']
00:03:34.288 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29303512 kB' 'MemAvailable: 32881368 kB' 'Buffers: 2704 kB' 'Cached: 10188052 kB' 'SwapCached: 0 kB' 'Active: 7200220 kB' 'Inactive: 3506120 kB' 'Active(anon): 6806548 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518884 kB' 'Mapped: 197368 kB' 'Shmem: 6290964 kB' 'KReclaimable: 180204 kB' 'Slab: 527936 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347732 kB' 'KernelStack: 12448 kB' 'PageTables: 7796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7903720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
00:03:34.289 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 [trace condensed: scan for HugePages_Surp steps through MemTotal through HugePages_Rsvd, "continue" on every non-matching key]
00:03:34.291 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.291 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.291 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:34.291 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
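The condensed setup/common.sh@17-29 lines above also show how the helper picks its data source: with node unset, the probe at @23 tests the literal path /sys/devices/system/node/node/meminfo, fails, and the helper stays on /proc/meminfo; per-node files carry a "Node N " prefix on every line, which @29 strips with an extglob expansion. A hedged sketch of that selection and strip follows; the function names are ours for illustration, not identifiers from the script.

    shopt -s extglob                             # required for the +([0-9]) pattern
    # Illustrative helpers showing the source selection at common.sh@22-25
    # and the prefix strip at @29.
    meminfo_file() {
        local node=$1
        local mem_f=/proc/meminfo
        # With node="" this tests .../node/node/meminfo, which never exists,
        # so the global file wins -- exactly what the trace shows at @23.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        echo "$mem_f"
    }
    strip_node_prefix() {
        local -a mem
        mapfile -t mem                           # read the chosen file from stdin
        mem=("${mem[@]#Node +([0-9]) }")         # "Node 0 MemTotal: ..." -> "MemTotal: ..."
        printf '%s\n' "${mem[@]}"
    }
    strip_node_prefix < "$(meminfo_file 0)"      # per-node example; falls back to /proc/meminfo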
00:03:34.291 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:34.291 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-29 [trace condensed: same setup as above with get=HugePages_Rsvd; the snapshot at @16 is nearly identical to the previous one, differing only in MemFree: 29303260 kB, Active: 7199736 kB, Active(anon): 6806064 kB, Cached: 10188068 kB, AnonPages: 518296 kB, Shmem: 6290980 kB, KernelStack: 12416 kB, PageTables: 7664 kB, Committed_AS: 7903872 kB]
00:03:34.291 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 [trace condensed: scan for HugePages_Rsvd steps through MemTotal through HugePages_Free, "continue" on every non-matching key]
00:03:34.293 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:34.293 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.293 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:34.293 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:34.293 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:34.293 nr_hugepages=1025
00:03:34.293 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:34.293 resv_hugepages=0
00:03:34.293 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:34.293 surplus_hugepages=0
00:03:34.293 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:34.293 anon_hugepages=0
00:03:34.293 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:34.293 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
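With anon, surp, and resv collected, hugepages.sh@107-109 runs the odd_alloc consistency check proper: the test requested an odd page count (1025) and verifies the kernel's accounting agrees. One plausible restatement as a sketch, using the values echoed in this run and an awk one-liner standing in for get_meminfo; variable names and the exact comparison semantics are our reading of the trace, not the verbatim hugepages.sh source.

    # Values echoed by the trace above.
    nr_hugepages=1025
    surp=0
    resv=0
    # hugepages.sh@107 in sketch form: the user-visible total must equal
    # requested + surplus + reserved; here surplus and reserved are zero.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) || echo "odd_alloc: total $total != expected 1025"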
00:03:34.293 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:34.293 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-29 [trace condensed: same setup with get=HugePages_Total; the snapshot at @16 again differs only marginally from the first one (MemFree: 29303304 kB, Active: 7199724 kB, Active(anon): 6806052 kB, Cached: 10188088 kB, AnonPages: 518296 kB, Shmem: 6291000 kB, KernelStack: 12416 kB, PageTables: 7664 kB, Committed_AS: 7903896 kB)]
00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 [trace condensed: scan for HugePages_Total steps through MemTotal through Bounce, "continue" on every non-matching key]
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.294 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.295 18:55:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- 
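The records above are setup/common.sh's get_meminfo helper scanning the /proc/meminfo snapshot field by field until it reaches HugePages_Total and printing its value (1025). A minimal sketch of that helper, reconstructed from this xtrace only, so treat the details as assumptions rather than the authoritative setup/common.sh:

    # get_meminfo FIELD [NODE]: print FIELD's value from /proc/meminfo, or from
    # the per-node meminfo file when NODE is given (sketch inferred from the trace)
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix each line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # split "Field: value kB"
            [[ $var == "$get" ]] || continue         # not the requested field: keep scanning
            echo "$val"
            return 0
        done
        return 1
    }

Against the snapshot above, get_meminfo HugePages_Total would print 1025, the value echoed at setup/common.sh@33 and consumed by the check that follows.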
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 14045324 kB' 'MemUsed: 10574088 kB' 'SwapCached: 0 kB' 'Active: 5366904 kB' 'Inactive: 3329772 kB' 'Active(anon): 5108272 kB' 'Inactive(anon): 0 kB' 'Active(file): 258632 kB' 'Inactive(file): 3329772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8401580 kB' 'Mapped: 84280 kB' 'AnonPages: 298272 kB' 'Shmem: 4813176 kB' 'KernelStack: 7752 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110628 kB' 'Slab: 280312 kB' 'SReclaimable: 110628 kB' 'SUnreclaim: 169684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.295 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:34.296 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.296 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.296 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:34.296 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:34.296 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:34.296 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:34.296 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:34.296 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:34.297 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:34.297 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:34.297 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:34.297 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:34.297 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:34.297 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:34.297 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:34.297 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:34.297 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:34.297 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:34.297 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 15258120 kB' 'MemUsed: 4149124 kB' 'SwapCached: 0 kB' 'Active: 1833160 kB' 'Inactive: 176348 kB' 'Active(anon): 1698120 kB' 'Inactive(anon): 0 kB' 'Active(file): 135040 kB' 'Inactive(file): 176348 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1789236 kB' 'Mapped: 113088 kB' 'AnonPages: 220376 kB' 'Shmem: 1477848 kB' 'KernelStack: 4696 kB' 'PageTables: 3604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69576 kB' 'Slab: 247624 kB' 'SReclaimable: 69576 kB' 'SUnreclaim: 178048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:03:34.297 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.297 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:34.558 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:34.558 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:34.558 18:55:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:34.558 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:34.558 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:34.558 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:34.558 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
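Both nodes report HugePages_Surp 0, so after the zero surplus and reserve counts are added, nodes_test holds the totals read back per node: 512 pages on node0 and 513 on node1. Because the kernel is free to place the one odd page on either node, hugepages.sh keys the sorted_t/sorted_s arrays by page count rather than by node, which makes the comparison order-insensitive. A rough bash restatement of that check, using the trace's own variable names; the concrete per-node expectations are inferred from the echoes that follow, not from the script itself:

    nodes_test=([0]=512 [1]=513)   # totals read back from each node's meminfo
    nodes_sys=([0]=513 [1]=512)    # what the test had seeded per node (inferred)
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1    # index by count, so node placement drops out
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # both key lists expand to "512 513", so the comparison succeeds
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]

This is why the [[ 512 513 == \5\1\2\ \5\1\3 ]] test below passes even though node0 holds 512 pages while the echo says it was 'expecting 513'.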
00:03:34.558 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:34.558 node0=512 expecting 513
00:03:34.558 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:34.558 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:34.558 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:34.558 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:34.558 node1=513 expecting 512
00:03:34.558 18:55:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:34.558
00:03:34.558 real 0m1.974s
00:03:34.558 user 0m0.808s
00:03:34.558 sys 0m1.139s
00:03:34.558 18:55:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:34.558 18:55:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:34.558 ************************************
00:03:34.558 END TEST odd_alloc
00:03:34.558 ************************************
00:03:34.558 18:55:40 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:34.558 18:55:40 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:34.558 18:55:40 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:34.558 18:55:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:34.558 ************************************
00:03:34.558 START TEST custom_alloc
00:03:34.558 ************************************
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
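The custom_alloc prologue above sizes its first pool: get_test_nr_hugepages 1048576 turns a 1 GiB request (1048576 kB) into 512 hugepages using the 2048 kB Hugepagesize reported in the snapshots, and get_test_nr_hugepages_per_node then spreads that count over the 2 nodes. A small sketch of just the size-to-pages arithmetic; the real function also handles explicit per-node arguments (the (( 1 > 1 )) branch above), which this sketch omits:

    default_hugepages=2048              # kB, matching 'Hugepagesize: 2048 kB' above
    get_test_nr_hugepages() {
        local size=$1                   # requested pool size in kB
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))
    }
    get_test_nr_hugepages 1048576 && echo "$nr_hugepages"   # 512
    get_test_nr_hugepages 2097152 && echo "$nr_hugepages"   # 1024

The 1024-page case shows up a few records later, when custom_alloc sizes the second, larger pool for node1.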
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:34.558 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:34.559 18:55:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:35.935 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:35.935 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:35.935 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:35.935 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:35.935 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:35.935 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:35.935 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:36.196 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:36.196 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:36.196 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:36.196 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:36.196 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:36.196 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:36.196 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:36.196 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:36.196 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:36.196 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
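With nodes_hp[0]=512 and nodes_hp[1]=1024 decided, the loop above joins them into the HUGENODE string (the local IFS=, at hugepages.sh@167 supplies the commas) and accumulates the 1536-page total that verify_nr_hugepages checks next. A self-contained sketch of that handoff, again following the trace's own names:

    nodes_hp=([0]=512 [1]=1024)       # pages wanted on node0 and node1
    HUGENODE=() _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done
    (IFS=,; echo "HUGENODE=${HUGENODE[*]}")   # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
    echo "total=$_nr_hugepages"               # 1536

scripts/setup.sh then runs with that HUGENODE value in its environment, which is how the per-node page counts reach the allocator before the verification pass below.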
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.196 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28232328 kB' 'MemAvailable: 31810184 kB' 'Buffers: 2704 kB' 'Cached: 10188184 kB' 'SwapCached: 0 kB' 'Active: 7205416 kB' 'Inactive: 3506120 kB' 'Active(anon): 6811744 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523920 kB' 'Mapped: 197816 kB' 'Shmem: 6291096 kB' 'KReclaimable: 180204 kB' 'Slab: 527912 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347708 kB' 'KernelStack: 12832 kB' 'PageTables: 9188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7912976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
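The snapshot above is internally consistent: HugePages_Total 1536 at a 2048 kB Hugepagesize gives Hugetlb = 1536 x 2048 kB = 3145728 kB, i.e. exactly the 512 + 1024 split just configured. The common.sh@17-@33 entries also expose how get_meminfo works: mapfile the whole meminfo file, strip any "Node N " prefix, then split each line on ': ' until the requested key matches. A simplified reconstruction (assumption: condensed from the trace; the real helper in setup/common.sh differs in detail, and relies on extglob being enabled):

    #!/usr/bin/env bash
    # Simplified reconstruction of the get_meminfo pattern traced above.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Per-node lookups read the node's own meminfo, as the @23 test probes.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip "Node N " prefix (common.sh@29)
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Total  # -> 1536 on this box, per the snapshot above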
[~160 repetitive xtrace entries elided: setup/common.sh@32 probes each of the 40 non-matching /proc/meminfo keys (MemTotal through HardwareCorrupted, in snapshot order) against AnonHugePages, hitting "continue" plus an IFS/read pair on every non-match]
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.198 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28236956 kB' 'MemAvailable: 31814812 kB' 'Buffers: 2704 kB' 'Cached: 10188188 kB' 'SwapCached: 0 kB' 'Active: 7198820 kB' 'Inactive: 3506120 kB' 'Active(anon): 6805148 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517260 kB' 'Mapped: 197380 kB' 'Shmem: 6291100 kB' 'KReclaimable: 180204 kB' 'Slab: 528036 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347832 kB' 'KernelStack: 12384 kB' 'PageTables: 7576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7904516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
[~200 repetitive xtrace entries elided: setup/common.sh@32 probes each of the 51 non-matching keys (MemTotal through HugePages_Rsvd, in snapshot order) against HugePages_Surp, hitting "continue" plus an IFS/read pair on every non-match]
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.466 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28237576 kB' 'MemAvailable: 31815432 kB' 'Buffers: 2704 kB' 'Cached: 10188204 kB' 'SwapCached: 0 kB' 'Active: 7199000 kB' 'Inactive: 3506120 kB' 'Active(anon): 6805328 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517464 kB' 'Mapped: 197380 kB' 'Shmem: 6291116 kB' 'KReclaimable: 180204 kB' 'Slab: 528100 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347896 kB' 'KernelStack: 12464 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7904536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
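Both snapshots agree on HugePages_Total 1536 with zero surplus, so the @99 probe resolves to surp=0 and the verifier moves on to HugePages_Rsvd. For reference, a HUGENODE spec like the one passed at @187 maps onto per-node 2 MiB hugepage counts through the kernel's per-node sysfs knobs, roughly as below (illustrative sketch only; scripts/setup.sh is the real consumer and does considerably more):

    #!/usr/bin/env bash
    # Illustrative sketch (assumption: NOT scripts/setup.sh) of turning the
    # @187 HUGENODE spec into per-node 2 MiB hugepage counts via sysfs.
    spec='nodes_hp[0]=512,nodes_hp[1]=1024'
    IFS=, read -ra pairs <<< "$spec"
    for pair in "${pairs[@]}"; do
        node=${pair#nodes_hp[}; node=${node%%]*}  # 0, then 1
        pages=${pair#*=}                          # 512, then 1024
        echo "$pages" | sudo tee \
            "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
    done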
[repetitive xtrace entries elided: setup/common.sh@32 probes each key from MemTotal onward (in snapshot order, through ShmemHugePages) against HugePages_Rsvd, hitting "continue" plus an IFS/read pair on every non-match; the capture ends mid-entry at]
00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@100 -- # resv=0 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:36.468 nr_hugepages=1536 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.468 resv_hugepages=0 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.468 surplus_hugepages=0 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.468 anon_hugepages=0 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.468 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.469 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28237768 kB' 'MemAvailable: 31815624 kB' 'Buffers: 2704 kB' 'Cached: 10188224 kB' 'SwapCached: 0 kB' 'Active: 7198980 kB' 'Inactive: 3506120 kB' 'Active(anon): 6805308 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517460 kB' 'Mapped: 197380 kB' 'Shmem: 6291136 kB' 'KReclaimable: 180204 kB' 'Slab: 528100 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347896 kB' 'KernelStack: 12464 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7904556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB' 00:03:36.469 18:55:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
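What the scans above are doing, as a minimal bash sketch reconstructed from the xtrace entries (the function name, sysfs paths and structure mirror what the trace shows at setup/common.sh@17-33; this is an approximation, not the literal SPDK setup/common.sh source):

shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {                       # e.g. get_meminfo HugePages_Total -> 1536
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    local -a mem
    # With a node index, prefer the per-node copy under sysfs (common.sh@23-24).
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix each line with "Node N "; strip it (common.sh@29).
    mem=("${mem[@]#Node +([0-9]) }")
    # Split each "Key: value kB" line on ': ' and stop at the requested key.
    # bash xtrace prints the literal right-hand side of [[ ]] with every
    # character backslash-escaped, which is why the log shows
    # \H\u\g\e\P\a\g\e\s\_\R\s\v\d for the plain string HugePages_Rsvd.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Called as get_meminfo HugePages_Total it prints 1536 on this box; with a node argument (get_meminfo HugePages_Surp 0) it reads the per-node sysfs copy instead, which is exactly the sequence the trace shows next.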
[xtrace elided: the same per-key scan repeats for get_meminfo HugePages_Total — MemTotal through Unaccepted each compare false and continue — until HugePages_Total matches]
00:03:36.470 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:36.470 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:36.470 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:36.470 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:36.470 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:36.470 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:36.471 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 14037680 kB' 'MemUsed: 10581732 kB' 'SwapCached: 0 kB' 'Active: 5366096 kB' 'Inactive: 3329772 kB' 'Active(anon): 5107464 kB' 'Inactive(anon): 0 kB' 'Active(file): 258632 kB' 'Inactive(file): 3329772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8401632 kB' 'Mapped: 84280 kB' 'AnonPages: 297452 kB' 'Shmem: 4813228 kB' 'KernelStack: 7784 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110628 kB' 'Slab: 280356 kB' 'SReclaimable: 110628 kB' 'SUnreclaim: 169728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
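The get_nodes step just traced enumerates NUMA nodes with an extglob pattern and keys an array by the node index. A sketch of that loop, under the assumption that the 512/1024 values come from each node's 2048kB hugepage counter in sysfs (the trace only shows the already-expanded assignments at hugepages.sh@30, so that source path is an assumption):

shopt -s extglob nullglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} strips everything through the last "node", leaving the index.
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || echo "no NUMA nodes found" >&2

On this box the expansion shows node0=512 and node1=1024, a deliberately asymmetric split of the 1536 pages that the custom_alloc test then verifies node by node.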
[xtrace elided: per-key scan of node0's meminfo (MemTotal through HugePages_Free) against HugePages_Surp, continuing until it matches]
00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:36.472 18:55:42
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.472 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 14214752 kB' 'MemUsed: 5192492 kB' 'SwapCached: 0 kB' 'Active: 1833132 kB' 'Inactive: 176348 kB' 'Active(anon): 1698092 kB' 'Inactive(anon): 0 kB' 'Active(file): 135040 kB' 'Inactive(file): 176348 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1789340 kB' 'Mapped: 113100 kB' 'AnonPages: 220200 kB' 'Shmem: 1477952 kB' 'KernelStack: 4696 kB' 'PageTables: 3568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69576 kB' 'Slab: 247740 kB' 'SReclaimable: 69576 kB' 'SUnreclaim: 178164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.473 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
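The node1 dump above completes the picture, and the three dumps can be cross-checked against each other: the per-node HugePages_Total values must sum to the global total, and with resv=0/surp=0 that is the same comparison the allocator's (( 1536 == nr_hugepages + surp + resv )) test performs. A quick worked check with the values taken from the dumps:

# Values copied from the global, node0 and node1 dumps in this log.
node0=512 node1=1024 nr_hugepages=1536 surp=0 resv=0
(( node0 + node1 == nr_hugepages + surp + resv )) \
    && echo "custom_alloc split consistent: $node0 + $node1 = $nr_hugepages"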
[xtrace elided: per-key scan of node1's meminfo proceeds the same way, comparing each key against HugePages_Surp]
00:03:36.474 18:55:42
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:36.474 node0=512 expecting 512 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:36.474 node1=1024 expecting 1024 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:36.474 00:03:36.474 real 0m2.070s 00:03:36.474 user 0m0.887s 00:03:36.474 sys 0m1.168s 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:36.474 18:55:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:36.474 ************************************ 00:03:36.474 END TEST custom_alloc 00:03:36.474 ************************************ 00:03:36.474 18:55:42 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:36.474 18:55:42 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:36.474 18:55:42 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:36.474 18:55:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:36.771 ************************************ 00:03:36.771 START TEST no_shrink_alloc 00:03:36.771 ************************************ 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
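The trace above shows get_test_nr_hugepages turning a 2097152 kB (2 GiB) request into nr_hugepages=1024 pages of 2048 kB each and pinning all of them to node 0. Below is a minimal bash sketch of that flow; the names mirror setup/hugepages.sh, but the bodies are reconstructions inferred from the xtrace, not SPDK's actual implementation.

    #!/usr/bin/env bash
    # Sketch of the get_test_nr_hugepages flow traced above.
    # ASSUMPTION: reconstructed from the xtrace, not SPDK's real code.

    default_hugepages=2048 # kB; matches 'Hugepagesize: 2048 kB' in the snapshots
    nodes_test=()          # indexed array: node id -> requested page count

    get_test_nr_hugepages() {
    	local size=$1 # requested pool size in kB (2097152 kB == 2 GiB)
    	shift
    	local node_ids=("$@") # optional NUMA node list, e.g. ('0')

    	(( size >= default_hugepages )) || return 1
    	nr_hugepages=$((size / default_hugepages)) # 2097152 / 2048 = 1024 pages

    	# With explicit nodes the whole request lands on those nodes; the real
    	# script spreads the pages over all online nodes when none are given.
    	local node
    	for node in "${node_ids[@]}"; do
    		nodes_test[node]=$nr_hugepages
    	done
    }

    get_test_nr_hugepages 2097152 0
    echo "node0=${nodes_test[0]}" # -> node0=1024, as echoed by the test below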
00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:36.771 18:55:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:38.151 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:38.151 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:38.151 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:38.151 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:38.151 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:38.151 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:38.151 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:38.151 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:38.151 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:38.151 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:38.151 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:38.151 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:38.151 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:38.151 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:38.151 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:38.151 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:38.151 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.151 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.152 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29293644 kB' 'MemAvailable: 32871500 kB' 'Buffers: 2704 kB' 'Cached: 10188308 kB' 'SwapCached: 0 kB' 'Active: 7199724 kB' 'Inactive: 3506120 kB' 'Active(anon): 6806052 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517996 kB' 'Mapped: 197464 kB' 'Shmem: 6291220 kB' 'KReclaimable: 180204 kB' 'Slab: 527964 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347760 kB' 'KernelStack: 12496 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7904720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
00:03:38.152 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:38.152 [...setup/common.sh@32 continue / @31 IFS=': ' / @31 read -r var val _ repeats for each field until AnonHugePages...]
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29297580 kB' 'MemAvailable: 32875436 kB' 'Buffers: 2704 kB' 'Cached: 10188312 kB' 'SwapCached: 0 kB' 'Active: 7199376 kB' 'Inactive: 3506120 kB' 'Active(anon): 6805704 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517620 kB' 'Mapped: 197392 kB' 'Shmem: 6291224 kB' 'KReclaimable: 180204 kB' 'Slab: 528020 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347816 kB' 'KernelStack: 12480 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7904736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
00:03:38.153 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.153 [...setup/common.sh@32 continue / @31 IFS=': ' / @31 read -r var val _ repeats for each field until HugePages_Surp...]
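Every get_meminfo call condensed above follows the same pattern: snapshot the meminfo file into an array, strip any 'Node N ' prefixes, then scan the 'key: value' pairs until the requested field matches and echo its value. A minimal reconstruction of setup/common.sh's get_meminfo under those assumptions; the real function may differ in detail.

    #!/usr/bin/env bash
    # Reconstruction of get_meminfo as inferred from the condensed xtrace.
    # ASSUMPTION: per-node file selection and prefix stripping follow the
    # trace but are not copied from SPDK's setup/common.sh.
    shopt -s extglob

    get_meminfo() {
    	local get=$1 node=${2:-}
    	local var val _
    	local mem_f=/proc/meminfo mem

    	# A node argument switches to that NUMA node's meminfo file.
    	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    		mem_f=/sys/devices/system/node/node$node/meminfo
    	fi

    	mapfile -t mem < "$mem_f"
    	mem=("${mem[@]#Node +([0-9]) }") # per-node lines carry a "Node N " prefix

    	# The [[ <field> == \R\e\q\u\e\s\t ]] / continue loop seen in the trace:
    	# scan "key: value" pairs until the requested field matches.
    	while IFS=': ' read -r var val _; do
    		[[ $var == "$get" ]] || continue
    		echo "${val:-0}"
    		return 0
    	done < <(printf '%s\n' "${mem[@]}")
    	return 1
    }

    get_meminfo HugePages_Surp # printed 0 on the system traced here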
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29297580 kB' 'MemAvailable: 32875436 kB' 'Buffers: 2704 kB' 'Cached: 10188316 kB' 'SwapCached: 0 kB' 'Active: 7199108 kB' 'Inactive: 3506120 kB' 'Active(anon): 6805436 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517344 kB' 'Mapped: 197392 kB' 'Shmem: 6291228 kB' 'KReclaimable: 180204 kB' 'Slab: 528020 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347816 kB' 'KernelStack: 12480 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7904760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:38.419 [...setup/common.sh@32 continue / @31 IFS=': ' / @31 read -r var val _ repeats for the remaining /proc/meminfo fields...]
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.419 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.420 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
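For readability, the parsing loop that the xtrace above keeps replaying amounts to the following. This is a minimal sketch reconstructed from the traced statements, not the verbatim setup/common.sh source; exact line numbers and error handling may differ.

#!/usr/bin/env bash
# Sketch of get_meminfo as reconstructed from the xtrace: print the value of
# one /proc/meminfo key, or of the per-node copy when a node number is given.
shopt -s extglob # needed for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # With a node argument, read the per-node file instead; its lines carry
    # a "Node N " prefix that the expansion below strips off.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        # First matching key wins: echo its value (e.g. "1024" or "0") and stop.
        [[ $var == "$get" ]] && echo "$val" && return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Rsvd     # system-wide: prints 0 in the run above
get_meminfo HugePages_Surp 0   # node0 file: also 0 in the run above

Each traced scan is one pass of this loop, which is why the log walks every key of the meminfo dump before hitting the one it wants.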
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:38.421 nr_hugepages=1024
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:38.421 resv_hugepages=0
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:38.421 surplus_hugepages=0
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:38.421 anon_hugepages=0
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.421 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29297328 kB' 'MemAvailable: 32875184 kB' 'Buffers: 2704 kB' 'Cached: 10188352 kB' 'SwapCached: 0 kB' 'Active: 7199664 kB' 'Inactive: 3506120 kB' 'Active(anon): 6805992 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517944 kB' 'Mapped: 197392 kB' 'Shmem: 6291264 kB' 'KReclaimable: 180204 kB' 'Slab: 528012 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347808 kB' 'KernelStack: 12480 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7904780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
[... repeated setup/common.sh@31-32 skip iterations over every /proc/meminfo key before HugePages_Total elided ...]
00:03:38.423 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:38.423 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:38.423 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:38.423 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:38.423 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:38.423 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:38.423 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:38.423 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
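The get_nodes walk above only shows the already-expanded assignments (node0 holds 1024 pages, node1 holds 0). A sketch of what it plausibly does, assuming the per-node counts come from get_meminfo as sketched earlier; the xtrace does not show the command substitution itself, so that call is an assumption.

# Sketch only: tally per-node HugePages_Total into nodes_sys, as implied by
# the traced assignments nodes_sys[0]=1024 and nodes_sys[1]=0.
shopt -s extglob
nodes_sys=()

get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} reduces ".../node0" to the bare index "0".
        # Assumed source of the count; the trace shows only the final values.
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) # at least one NUMA node must be present (2 here)
}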
18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12990748 kB' 'MemUsed: 11628664 kB' 'SwapCached: 0 kB' 'Active: 5366816 kB' 'Inactive: 3329772 kB' 'Active(anon): 5108184 kB' 'Inactive(anon): 0 kB' 'Active(file): 258632 kB' 'Inactive(file): 3329772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8401708 kB' 'Mapped: 84280 kB' 'AnonPages: 297984 kB' 'Shmem: 4813304 kB' 'KernelStack: 7784 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110628 kB' 'Slab: 280416 kB' 'SReclaimable: 110628 kB' 'SUnreclaim: 169788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.424 18:55:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:38.424 18:55:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
[trace condensed: setup/common.sh@31-32 read/continue loop; the remaining node0 meminfo keys (Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) are each compared against HugePages_Surp and skipped]
00:03:38.425 18:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.425 18:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.425 18:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:38.425 18:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:38.425 18:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:38.425 18:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:38.425 18:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:38.425 18:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:38.426 18:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:38.426 18:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:38.426 18:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:38.426 18:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:38.426 18:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:38.426 18:55:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:39.802 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:39.802 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:39.802 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:39.802 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:39.802 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:39.802 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:39.802 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:40.062 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:40.062 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:40.062 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:40.062 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:40.062 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:40.062 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:40.062 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:40.062 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:40.062 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:40.062 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:40.062 INFO: Requested 512 hugepages but 1024 already allocated on node0
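Every get_meminfo call in this trace follows the same pattern: mapfile the meminfo source, strip any "Node <n> " prefix, then read each line with IFS=': ' and compare the key against the requested name (common.sh writes the comparison with escaped globs, e.g. [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]], which behaves like a literal string match). A minimal self-contained sketch of that lookup, with a hypothetical helper name (the real implementation is setup/common.sh's get_meminfo):

    #!/usr/bin/env bash
    # Sketch of the lookup pattern the trace above exercises: scan
    # /proc/meminfo line by line and print the value of one key.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do   # "_" swallows the trailing "kB"
            if [[ $var == "$get" ]]; then      # literal match, like the escaped-glob form
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch HugePages_Surp   # prints 0 on this host, per the trace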
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.062 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29292108 kB' 'MemAvailable: 32869964 kB' 'Buffers: 2704 kB' 'Cached: 10188424 kB' 'SwapCached: 0 kB' 'Active: 7199040 kB' 'Inactive: 3506120 kB' 'Active(anon): 6805368 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517184 kB' 'Mapped: 197476 kB' 'Shmem: 6291336 kB' 'KReclaimable: 180204 kB' 'Slab: 527944 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347740 kB' 'KernelStack: 12432 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7904596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
[trace condensed: setup/common.sh@31-32 read/continue loop; every key from MemTotal through HardwareCorrupted is compared against AnonHugePages and skipped]
00:03:40.064 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:40.064 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:40.064 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:40.064 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
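With anon (and, next, surp and resv) each coming from a separate full scan, the loop form does more work than a single-key query strictly needs. For the flat /proc/meminfo case the same value could be pulled with a one-liner; this is an alternative sketch, not what setup/common.sh does:

    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo

The loop form earns its keep on the per-node files: /sys/devices/system/node/node<n>/meminfo prefixes every line with "Node <n> ", which is exactly what the mem=("${mem[@]#Node +([0-9]) }") step in the trace strips before comparing keys.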
00:03:40.064 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:40.064 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.064 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:40.064 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:40.064 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.064 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.064 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.064 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.064 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.064 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.064 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29292980 kB' 'MemAvailable: 32870836 kB' 'Buffers: 2704 kB' 'Cached: 10188428 kB' 'SwapCached: 0 kB' 'Active: 7199272 kB' 'Inactive: 3506120 kB' 'Active(anon): 6805600 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517408 kB' 'Mapped: 197400 kB' 'Shmem: 6291340 kB' 'KReclaimable: 180204 kB' 'Slab: 527968 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347764 kB' 'KernelStack: 12464 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7904616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
[trace condensed: setup/common.sh@31-32 read/continue loop; every key from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped]
00:03:40.327 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.327 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:40.327 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:40.327 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:40.327 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:40.327 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:40.327 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:40.327 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:40.327 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:40.327 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.327 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.327 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.327 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.327 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.328 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29292916 kB' 'MemAvailable: 32870772 kB' 'Buffers: 2704 kB' 'Cached: 10188452 kB' 'SwapCached: 0 kB' 'Active: 7199628 kB' 'Inactive: 3506120 kB' 'Active(anon): 6805956 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517768 kB' 'Mapped: 197400 kB' 'Shmem: 6291364 kB' 'KReclaimable: 180204 kB' 'Slab: 527968 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347764 kB' 'KernelStack: 12496 kB' 'PageTables: 7784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7905008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB'
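The three /proc/meminfo snapshots above differ only in small drift (MemFree, AnonPages, PageTables, Committed_AS); the hugepage counters stay fixed throughout: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB. The per-node counts behind the node0=1024 check can also be read directly from sysfs; a small sketch, assuming the 2048 kB page size reported above (loop variable names are illustrative):

    # Per-node 2 MiB hugepage counters via the standard sysfs interface.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        n=${node_dir##*node}
        hp=$node_dir/hugepages/hugepages-2048kB
        echo "node$n: total=$(<"$hp/nr_hugepages") free=$(<"$hp/free_hugepages")"
    done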
[trace condensed: setup/common.sh@31-32 read/continue loop; each key from MemTotal onward is compared against HugePages_Rsvd and skipped; the scan is still in progress here]
00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.329 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:40.330 nr_hugepages=1024 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.330 resv_hugepages=0 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.330 surplus_hugepages=0 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.330 anon_hugepages=0 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29292468 kB' 'MemAvailable: 32870324 kB' 'Buffers: 2704 kB' 
'Cached: 10188472 kB' 'SwapCached: 0 kB' 'Active: 7199684 kB' 'Inactive: 3506120 kB' 'Active(anon): 6806012 kB' 'Inactive(anon): 0 kB' 'Active(file): 393672 kB' 'Inactive(file): 3506120 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517800 kB' 'Mapped: 197400 kB' 'Shmem: 6291384 kB' 'KReclaimable: 180204 kB' 'Slab: 527968 kB' 'SReclaimable: 180204 kB' 'SUnreclaim: 347764 kB' 'KernelStack: 12496 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7905028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 33216 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1715804 kB' 'DirectMap2M: 11835392 kB' 'DirectMap1G: 38797312 kB' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.330 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.331 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.332 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12982768 kB' 'MemUsed: 11636644 kB' 'SwapCached: 0 kB' 'Active: 5366340 kB' 'Inactive: 3329772 kB' 'Active(anon): 5107708 kB' 'Inactive(anon): 0 kB' 'Active(file): 258632 kB' 'Inactive(file): 3329772 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8401796 
kB' 'Mapped: 84280 kB' 'AnonPages: 297456 kB' 'Shmem: 4813392 kB' 'KernelStack: 7784 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110628 kB' 'Slab: 280348 kB' 'SReclaimable: 110628 kB' 'SUnreclaim: 169720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.333 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.334 18:55:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:40.334 node0=1024 expecting 1024 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:40.334 00:03:40.334 real 0m3.784s 00:03:40.334 user 0m1.529s 00:03:40.334 sys 0m2.219s 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.334 18:55:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:40.334 ************************************ 00:03:40.334 END TEST no_shrink_alloc 00:03:40.334 ************************************ 00:03:40.334 18:55:45 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:40.334 18:55:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:40.334 18:55:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.334 18:55:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.334 18:55:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.334 18:55:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
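The lookup exercised throughout this test is the get_meminfo pattern from setup/common.sh: snapshot /proc/meminfo (or a per-node meminfo file), then scan it field by field until the requested key matches. Below is a minimal bash sketch of that pattern, reusing the names from the trace (get, node, var, val, mem_f); the body is a simplified reading of the trace, not the verbatim upstream source.

    # Sketch of the get_meminfo pattern seen in the trace above (simplified;
    # the upstream helper snapshots via mapfile + printf, as the trace shows).
    # Each meminfo line is "Key:   value [kB]"; IFS=': ' splits it into the
    # key (var) and the number (val), and non-matching keys just 'continue'.
    get_meminfo() {
        local get=$1 node=$2
        local var val line
        local mem_f=/proc/meminfo
        # Per-node query, e.g. "get_meminfo HugePages_Surp 0" above
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        while read -r line; do
            line=${line#Node +([0-9]) } # node files prefix each line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

Called as "get_meminfo HugePages_Total" it prints 1024 on this box, which is exactly the value the (( 1024 == nr_hugepages + surp + resv )) guard above consumes.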
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.334 18:55:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.334 18:55:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.334 18:55:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.334 18:55:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.334 18:55:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.334 18:55:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.334 18:55:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:40.334 18:55:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:40.334 00:03:40.334 real 0m15.369s 00:03:40.334 user 0m6.071s 00:03:40.334 sys 0m8.327s 00:03:40.334 18:55:45 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:40.334 18:55:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:40.334 ************************************ 00:03:40.334 END TEST hugepages 00:03:40.334 ************************************ 00:03:40.593 18:55:46 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:40.593 18:55:46 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.593 18:55:46 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.593 18:55:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:40.593 ************************************ 00:03:40.593 START TEST driver 00:03:40.593 ************************************ 00:03:40.593 18:55:46 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:40.593 * Looking for test storage... 
00:03:40.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:40.593 18:55:46 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:03:40.593 18:55:46 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:40.593 18:55:46 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:43.878 18:55:49 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:43.878 18:55:49 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:43.878 18:55:49 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:43.878 18:55:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:43.878 ************************************
00:03:43.878 START TEST guess_driver
00:03:43.878 ************************************
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 143 > 0 ))
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 
00:03:43.879 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 
00:03:43.879 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 
00:03:43.879 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 
00:03:43.879 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 
00:03:43.879 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 
00:03:43.879 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 
00:03:43.879 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:43.879 Looking for driver=vfio-pci
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:43.879 18:55:49 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:45.254 18:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:45.254 18:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:45.254 18:55:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the same @58/@61/@57 marker-and-driver check repeats for every remaining device line printed by setup.sh config (00:03:45.254 through 00:03:46.450) ...]
00:03:46.450 18:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:46.450 18:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:46.450 18:55:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:46.450 18:55:52 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:46.450 18:55:52 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:46.450 18:55:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:46.450 18:55:52 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:49.733 
00:03:49.733 real 0m5.922s
00:03:49.733 user 0m1.488s
00:03:49.733 sys 0m2.617s
18:55:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:49.733 18:55:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:49.733 ************************************
00:03:49.733 END TEST guess_driver
00:03:49.733 ************************************
00:03:49.733 
00:03:49.733 real 0m9.088s
00:03:49.733 user 0m2.176s
00:03:49.733 sys 0m4.048s
18:55:55 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:49.733
18:55:55 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:49.733 ************************************
00:03:49.733 END TEST driver
00:03:49.733 ************************************
00:03:49.733 18:55:55 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:49.733 18:55:55 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:49.733 18:55:55 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:49.733 18:55:55 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:49.733 ************************************
00:03:49.733 START TEST devices
00:03:49.733 ************************************
00:03:49.733 18:55:55 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:49.733 * Looking for test storage...
00:03:49.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:49.733 18:55:55 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:49.733 18:55:55 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:03:49.733 18:55:55 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:49.733 18:55:55 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:51.638 18:55:57 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:03:51.638 18:55:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:51.638 18:55:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:51.638 18:55:57 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:51.638 18:55:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:51.638 18:55:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:51.638 18:55:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:51.638 18:55:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:51.638 18:55:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:51.638 18:55:57 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:03:51.638 18:55:57 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:03:51.638 18:55:57 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:51.638 18:55:57 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:51.638 18:55:57 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:51.638 18:55:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:51.638 18:55:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:51.638 18:55:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:51.638 18:55:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:82:00.0
00:03:51.638 18:55:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]]
00:03:51.638 18:55:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:51.638 18:55:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:03:51.638 18:55:57 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:51.898 No valid GPT data, bailing
00:03:51.898 18:55:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:51.898 18:55:57 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:03:51.898 18:55:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:03:51.898 18:55:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:51.898 18:55:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:51.898 18:55:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:51.898 18:55:57 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:03:51.898 18:55:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:03:51.898 18:55:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:51.898 18:55:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0
00:03:51.898 18:55:57 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:03:51.898 18:55:57 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:03:51.898 18:55:57 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:51.898 18:55:57 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:51.898 18:55:57 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:51.898 18:55:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:51.898 ************************************
00:03:51.898 START TEST nvme_mount
00:03:51.898 ************************************
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:51.898 18:55:57 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:52.833 Creating new GPT entries in memory.
00:03:52.833 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:52.833 other utilities.
00:03:52.833 18:55:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:52.833 18:55:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:52.833 18:55:58 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:52.833 18:55:58 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:52.833 18:55:58 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:54.212 Creating new GPT entries in memory.
00:03:54.212 The operation has completed successfully.
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1512221
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
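The verify pass that follows is what ties the mount test back to PCI binding: after mkfs.ext4 and mount, the test re-runs `setup output config` with PCI_ALLOWED pinned to the disk under test and reads each reported device line, insisting that the NVMe controller shows up as held by the expected mount rather than as rebindable to vfio-pci. A sketch of that loop under the same "pci _ _ status" line format the trace reads (verify_mounted is an illustrative name, not the script's API):

verify_mounted() {
    local dev=$1 mounts=$2 found=0 pci _ status
    while read -r pci _ _ status; do
        [[ $pci == "$dev" ]] || continue
        # The status text must report the device as busy with our mount.
        [[ $status == *"Active devices:"*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED="$dev" setup output config)
    (( found == 1 ))
}

In the pass below this would be invoked along the lines of verify_mounted 0000:82:00.0 nvme0n1:nvme0n1p1.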
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:54.212 18:55:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:55.603 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:03:55.603 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:55.603 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:55.603 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the @62/@60 allow-list test and read repeat for 0000:00:04.0-04.7 and 0000:80:04.0-04.7, all skipped ...]
00:03:55.863 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:55.863 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:55.863 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:55.863 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:55.863 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:55.863 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:03:55.863 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:55.863 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:55.863 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:55.863 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:55.863 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:55.863 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:55.863 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:56.121 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:56.121 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:56.121 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:56.121 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:56.121 18:56:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:57.499 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:03:57.499 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:03:57.499 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:57.499 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the @62/@60 allow-list test and read repeat for 0000:00:04.0-04.7 and 0000:80:04.0-04.7, all skipped ...]
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' ''
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:57.759 18:56:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:59.136 18:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:03:59.136 18:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:03:59.136 18:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:59.136 18:56:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the @62/@60 allow-list test and read repeat once more for 0000:00:04.0-04.7 and 0000:80:04.0-04.7 ...]
00:03:59.396 18:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:59.396 18:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:59.396 18:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:03:59.396 18:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:03:59.396 18:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:59.396 18:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:59.396 18:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:59.396 18:56:05 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:59.396 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:59.396 
00:03:59.396 real 0m7.585s
00:03:59.396 user 0m1.859s
00:03:59.396 sys 0m3.337s
18:56:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:59.396 18:56:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:03:59.396 ************************************
00:03:59.396 END TEST nvme_mount
00:03:59.396 ************************************
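Every exit from nvme_mount runs through cleanup_nvme, which is where the wipefs lines above come from: unmount the test directory if it is still a mountpoint, then scrub the partition and whole-disk signatures so the dm_mount test that follows starts from a blank device. A sketch of that traced teardown (the $mnt argument stands in for the long .../spdk/test/setup/nvme_mount path in the log):

cleanup_nvme() {
    local mnt=$1
    # Only unmount if something is actually mounted there.
    mountpoint -q "$mnt" && umount "$mnt"
    # Erase the ext4 signature on the partition, then GPT/PMBR on the disk.
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
}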
00:03:59.396 18:56:05 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:03:59.396 18:56:05 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:59.396 18:56:05 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:59.396 18:56:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:59.655 ************************************
00:03:59.655 START TEST dm_mount
00:03:59.655 ************************************
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:59.655 18:56:05 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:00.592 Creating new GPT entries in memory.
00:04:00.592 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:00.592 other utilities.
00:04:00.592 18:56:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:00.592 18:56:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:00.592 18:56:06 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:00.592 18:56:06 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:00.592 18:56:06 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:01.526 Creating new GPT entries in memory.
The operation has completed successfully.
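partition_drive, whose first sgdisk call just completed, wipes the GPT and then lays each requested partition out as a fixed 1 GiB slice: 1073741824 bytes over 512-byte sectors is 2097152 sectors, which is exactly how the trace arrives at --new=1:2048:2099199 and, next, --new=2:2099200:4196351. A bash sketch of that loop with the disk and partition count hard-coded for illustration:

disk=/dev/nvme0n1
size=$(( 1073741824 / 512 ))   # 2097152 sectors per partition
part_start=0 part_end=0
sgdisk "$disk" --zap-all
for part in 1 2; do
    # First partition starts at sector 2048; each one follows the last.
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    # Hold an exclusive lock on the device node while rewriting the GPT.
    flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
done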
00:04:01.526 18:56:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:01.526 18:56:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:01.526 18:56:07 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:01.526 18:56:07 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:01.526 18:56:07 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:02.902 The operation has completed successfully.
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1514730
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:02.902 18:56:08 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]]
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:04.278 18:56:09 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:04.278 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:04.279 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.279 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:04:04.279 18:56:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:04.279 18:56:09 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.279 18:56:09 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.654 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.655 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:05.655 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.913 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.913 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:05.913 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:05.913 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:05.913 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:05.913 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:05.913 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:05.913 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:05.913 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:05.913 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:05.913 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:05.913 18:56:11 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:05.913 00:04:05.913 real 0m6.503s 00:04:05.913 user 0m1.184s 00:04:05.913 sys 0m2.200s 00:04:05.913 18:56:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.913 18:56:11 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:05.913 ************************************ 00:04:05.913 END TEST dm_mount 00:04:05.913 ************************************ 00:04:06.171 18:56:11 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:06.171 18:56:11 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:06.171 18:56:11 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.171 18:56:11 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.171 18:56:11 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:06.171 18:56:11 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.171 18:56:11 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:06.430 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:06.430 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:06.430 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:06.430 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:06.430 18:56:11 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:06.430 18:56:11 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.431 18:56:11 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:06.431 18:56:11 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:06.431 18:56:11 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:06.431 18:56:11 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:06.431 18:56:11 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:06.431 00:04:06.431 real 0m16.687s 00:04:06.431 user 0m3.958s 00:04:06.431 sys 0m7.016s 00:04:06.431 18:56:11 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.431 18:56:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:06.431 ************************************ 00:04:06.431 END TEST devices 00:04:06.431 ************************************ 00:04:06.431 00:04:06.431 real 0m54.931s 00:04:06.431 user 0m16.594s 00:04:06.431 sys 0m26.846s 00:04:06.431 18:56:11 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.431 18:56:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:06.431 ************************************ 00:04:06.431 END TEST setup.sh 00:04:06.431 ************************************ 00:04:06.431 18:56:11 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:08.351 Hugepages 00:04:08.351 node hugesize free / total 00:04:08.351 node0 1048576kB 0 / 0 00:04:08.351 node0 2048kB 2048 / 2048 00:04:08.351 node1 1048576kB 0 / 0 00:04:08.351 node1 2048kB 0 / 0 00:04:08.351 00:04:08.351 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.351 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:08.351 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:08.351 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:08.351 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:08.351 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:08.351 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:08.351 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:08.351 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:08.351 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:08.351 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:08.351 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:08.351 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:08.351 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:08.351 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:08.351 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:08.351 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:08.351 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:08.351 18:56:13 -- spdk/autotest.sh@130 -- # uname -s 00:04:08.351 18:56:13 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:08.351 18:56:13 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:08.351 18:56:13 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.255 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:10.255 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:10.255 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:10.255 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:10.255 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:10.255 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:10.255 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:10.255 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:10.255 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:10.255 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:10.255 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:10.255 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:10.255 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:10.255 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:10.255 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:10.255 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:10.830 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:11.123 18:56:16 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:12.059 18:56:17 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:12.059 18:56:17 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:12.059 18:56:17 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:12.059 18:56:17 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:12.059 18:56:17 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:12.059 18:56:17 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:12.059 18:56:17 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:12.059 18:56:17 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:12.059 18:56:17 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:12.318 18:56:17 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:12.318 18:56:17 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:04:12.318 18:56:17 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.220 Waiting for block devices as requested 00:04:14.220 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:04:14.220 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:14.220 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:14.220 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:14.480 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:14.480 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:14.480 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:14.480 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:14.739 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:14.739 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:14.739 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:14.998 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:14.998 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:14.998 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:15.257 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:15.257 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:15.257 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:15.516 18:56:20 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 
00:04:15.516 18:56:20 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:04:15.516 18:56:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:15.516 18:56:20 -- common/autotest_common.sh@1502 -- # grep 0000:82:00.0/nvme/nvme 00:04:15.516 18:56:20 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:04:15.516 18:56:20 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:04:15.516 18:56:20 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:04:15.516 18:56:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:15.516 18:56:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:15.516 18:56:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:15.516 18:56:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:15.516 18:56:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:15.516 18:56:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:15.516 18:56:20 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:04:15.516 18:56:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:15.516 18:56:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:15.516 18:56:21 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:15.516 18:56:21 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:15.516 18:56:21 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:15.516 18:56:21 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:15.516 18:56:21 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:15.516 18:56:21 -- common/autotest_common.sh@1557 -- # continue 00:04:15.516 18:56:21 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:15.516 18:56:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:15.516 18:56:21 -- common/autotest_common.sh@10 -- # set +x 00:04:15.516 18:56:21 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:15.516 18:56:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:15.516 18:56:21 -- common/autotest_common.sh@10 -- # set +x 00:04:15.516 18:56:21 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:17.417 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:17.417 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:17.417 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:17.417 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:17.417 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:17.417 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:17.417 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:17.417 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:17.417 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:17.417 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:17.417 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:17.417 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:17.417 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:17.417 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:17.417 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:17.417 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:17.983 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:18.242 18:56:23 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:18.242 18:56:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.242 18:56:23 -- 
common/autotest_common.sh@10 -- # set +x 00:04:18.242 18:56:23 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:18.242 18:56:23 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:18.242 18:56:23 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:18.242 18:56:23 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:18.242 18:56:23 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:18.242 18:56:23 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:18.242 18:56:23 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:18.242 18:56:23 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:18.242 18:56:23 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:18.242 18:56:23 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:18.242 18:56:23 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:18.242 18:56:23 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:18.242 18:56:23 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:04:18.242 18:56:23 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:18.242 18:56:23 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:04:18.242 18:56:23 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:18.242 18:56:23 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:18.242 18:56:23 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:18.242 18:56:23 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:82:00.0 00:04:18.242 18:56:23 -- common/autotest_common.sh@1592 -- # [[ -z 0000:82:00.0 ]] 00:04:18.242 18:56:23 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1520261 00:04:18.242 18:56:23 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:18.242 18:56:23 -- common/autotest_common.sh@1598 -- # waitforlisten 1520261 00:04:18.242 18:56:23 -- common/autotest_common.sh@831 -- # '[' -z 1520261 ']' 00:04:18.242 18:56:23 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.242 18:56:23 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:18.242 18:56:23 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.242 18:56:23 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:18.242 18:56:23 -- common/autotest_common.sh@10 -- # set +x 00:04:18.500 [2024-07-24 18:56:23.993310] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:04:18.500 [2024-07-24 18:56:23.993404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520261 ] 00:04:18.500 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.500 [2024-07-24 18:56:24.071087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.759 [2024-07-24 18:56:24.219631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.018 18:56:24 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:19.018 18:56:24 -- common/autotest_common.sh@864 -- # return 0 00:04:19.018 18:56:24 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:19.018 18:56:24 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:19.018 18:56:24 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:04:23.208 nvme0n1 00:04:23.208 18:56:28 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:23.208 [2024-07-24 18:56:28.548911] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:23.208 [2024-07-24 18:56:28.548979] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:23.208 request: 00:04:23.208 { 00:04:23.208 "nvme_ctrlr_name": "nvme0", 00:04:23.208 "password": "test", 00:04:23.208 "method": "bdev_nvme_opal_revert", 00:04:23.208 "req_id": 1 00:04:23.208 } 00:04:23.208 Got JSON-RPC error response 00:04:23.208 response: 00:04:23.208 { 00:04:23.208 "code": -32603, 00:04:23.208 "message": "Internal error" 00:04:23.208 } 00:04:23.208 18:56:28 -- common/autotest_common.sh@1604 -- # true 00:04:23.208 18:56:28 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:23.208 18:56:28 -- common/autotest_common.sh@1608 -- # killprocess 1520261 00:04:23.208 18:56:28 -- common/autotest_common.sh@950 -- # '[' -z 1520261 ']' 00:04:23.208 18:56:28 -- common/autotest_common.sh@954 -- # kill -0 1520261 00:04:23.208 18:56:28 -- common/autotest_common.sh@955 -- # uname 00:04:23.208 18:56:28 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:23.208 18:56:28 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1520261 00:04:23.208 18:56:28 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:23.208 18:56:28 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:23.208 18:56:28 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1520261' 00:04:23.208 killing process with pid 1520261 00:04:23.208 18:56:28 -- common/autotest_common.sh@969 -- # kill 1520261 00:04:23.208 18:56:28 -- common/autotest_common.sh@974 -- # wait 1520261 00:04:25.111 18:56:30 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:25.111 18:56:30 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:25.111 18:56:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:25.111 18:56:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:25.111 18:56:30 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:25.111 18:56:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.111 18:56:30 -- common/autotest_common.sh@10 -- # set +x 00:04:25.111 18:56:30 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:25.111 18:56:30 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:25.111 18:56:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.111 18:56:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.111 18:56:30 -- common/autotest_common.sh@10 -- # set +x 00:04:25.111 ************************************ 00:04:25.111 START TEST env 00:04:25.111 ************************************ 00:04:25.111 18:56:30 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:25.111 * Looking for test storage... 00:04:25.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:25.112 18:56:30 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:25.112 18:56:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.112 18:56:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.112 18:56:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.112 ************************************ 00:04:25.112 START TEST env_memory 00:04:25.112 ************************************ 00:04:25.112 18:56:30 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:25.112 00:04:25.112 00:04:25.112 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.112 http://cunit.sourceforge.net/ 00:04:25.112 00:04:25.112 00:04:25.112 Suite: memory 00:04:25.112 Test: alloc and free memory map ...[2024-07-24 18:56:30.764013] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:25.112 passed 00:04:25.112 Test: mem map translation ...[2024-07-24 18:56:30.793273] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:25.112 [2024-07-24 18:56:30.793307] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:25.112 [2024-07-24 18:56:30.793368] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:25.112 [2024-07-24 18:56:30.793385] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:25.372 passed 00:04:25.372 Test: mem map registration ...[2024-07-24 18:56:30.854888] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:25.372 [2024-07-24 18:56:30.854919] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:25.372 passed 00:04:25.372 Test: mem map adjacent registrations ...passed 00:04:25.372 00:04:25.372 Run Summary: Type Total Ran Passed Failed Inactive 00:04:25.372 suites 1 1 n/a 0 0 00:04:25.372 tests 4 4 4 0 0 00:04:25.372 asserts 152 152 152 0 n/a 00:04:25.372 00:04:25.372 Elapsed time = 0.204 seconds 00:04:25.372 00:04:25.372 real 0m0.213s 00:04:25.372 user 0m0.204s 00:04:25.372 sys 0m0.008s 00:04:25.372 18:56:30 
env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.372 18:56:30 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:25.372 ************************************ 00:04:25.372 END TEST env_memory 00:04:25.372 ************************************ 00:04:25.372 18:56:30 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:25.372 18:56:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.372 18:56:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.372 18:56:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.372 ************************************ 00:04:25.372 START TEST env_vtophys 00:04:25.372 ************************************ 00:04:25.372 18:56:31 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:25.372 EAL: lib.eal log level changed from notice to debug 00:04:25.372 EAL: Detected lcore 0 as core 0 on socket 0 00:04:25.372 EAL: Detected lcore 1 as core 1 on socket 0 00:04:25.372 EAL: Detected lcore 2 as core 2 on socket 0 00:04:25.372 EAL: Detected lcore 3 as core 3 on socket 0 00:04:25.372 EAL: Detected lcore 4 as core 4 on socket 0 00:04:25.372 EAL: Detected lcore 5 as core 5 on socket 0 00:04:25.372 EAL: Detected lcore 6 as core 8 on socket 0 00:04:25.372 EAL: Detected lcore 7 as core 9 on socket 0 00:04:25.372 EAL: Detected lcore 8 as core 10 on socket 0 00:04:25.372 EAL: Detected lcore 9 as core 11 on socket 0 00:04:25.372 EAL: Detected lcore 10 as core 12 on socket 0 00:04:25.372 EAL: Detected lcore 11 as core 13 on socket 0 00:04:25.372 EAL: Detected lcore 12 as core 0 on socket 1 00:04:25.372 EAL: Detected lcore 13 as core 1 on socket 1 00:04:25.372 EAL: Detected lcore 14 as core 2 on socket 1 00:04:25.372 EAL: Detected lcore 15 as core 3 on socket 1 00:04:25.372 EAL: Detected lcore 16 as core 4 on socket 1 00:04:25.372 EAL: Detected lcore 17 as core 5 on socket 1 00:04:25.372 EAL: Detected lcore 18 as core 8 on socket 1 00:04:25.372 EAL: Detected lcore 19 as core 9 on socket 1 00:04:25.372 EAL: Detected lcore 20 as core 10 on socket 1 00:04:25.372 EAL: Detected lcore 21 as core 11 on socket 1 00:04:25.372 EAL: Detected lcore 22 as core 12 on socket 1 00:04:25.372 EAL: Detected lcore 23 as core 13 on socket 1 00:04:25.372 EAL: Detected lcore 24 as core 0 on socket 0 00:04:25.372 EAL: Detected lcore 25 as core 1 on socket 0 00:04:25.372 EAL: Detected lcore 26 as core 2 on socket 0 00:04:25.372 EAL: Detected lcore 27 as core 3 on socket 0 00:04:25.372 EAL: Detected lcore 28 as core 4 on socket 0 00:04:25.372 EAL: Detected lcore 29 as core 5 on socket 0 00:04:25.372 EAL: Detected lcore 30 as core 8 on socket 0 00:04:25.372 EAL: Detected lcore 31 as core 9 on socket 0 00:04:25.372 EAL: Detected lcore 32 as core 10 on socket 0 00:04:25.372 EAL: Detected lcore 33 as core 11 on socket 0 00:04:25.372 EAL: Detected lcore 34 as core 12 on socket 0 00:04:25.372 EAL: Detected lcore 35 as core 13 on socket 0 00:04:25.372 EAL: Detected lcore 36 as core 0 on socket 1 00:04:25.372 EAL: Detected lcore 37 as core 1 on socket 1 00:04:25.372 EAL: Detected lcore 38 as core 2 on socket 1 00:04:25.372 EAL: Detected lcore 39 as core 3 on socket 1 00:04:25.372 EAL: Detected lcore 40 as core 4 on socket 1 00:04:25.372 EAL: Detected lcore 41 as core 5 on socket 1 00:04:25.372 EAL: Detected lcore 42 as core 8 on socket 1 00:04:25.372 EAL: Detected lcore 43 as core 9 
on socket 1 00:04:25.372 EAL: Detected lcore 44 as core 10 on socket 1 00:04:25.372 EAL: Detected lcore 45 as core 11 on socket 1 00:04:25.372 EAL: Detected lcore 46 as core 12 on socket 1 00:04:25.372 EAL: Detected lcore 47 as core 13 on socket 1 00:04:25.372 EAL: Maximum logical cores by configuration: 128 00:04:25.372 EAL: Detected CPU lcores: 48 00:04:25.372 EAL: Detected NUMA nodes: 2 00:04:25.372 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:25.372 EAL: Detected shared linkage of DPDK 00:04:25.372 EAL: No shared files mode enabled, IPC will be disabled 00:04:25.631 EAL: Bus pci wants IOVA as 'DC' 00:04:25.631 EAL: Buses did not request a specific IOVA mode. 00:04:25.631 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:25.631 EAL: Selected IOVA mode 'VA' 00:04:25.631 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.631 EAL: Probing VFIO support... 00:04:25.631 EAL: IOMMU type 1 (Type 1) is supported 00:04:25.631 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:25.631 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:25.631 EAL: VFIO support initialized 00:04:25.631 EAL: Ask a virtual area of 0x2e000 bytes 00:04:25.631 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:25.631 EAL: Setting up physically contiguous memory... 00:04:25.631 EAL: Setting maximum number of open files to 524288 00:04:25.631 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:25.631 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:25.631 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:25.631 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.632 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:25.632 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.632 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.632 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:25.632 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:25.632 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.632 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:25.632 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.632 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.632 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:25.632 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:25.632 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.632 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:25.632 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.632 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.632 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:25.632 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:25.632 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.632 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:25.632 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.632 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.632 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:25.632 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:25.632 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:25.632 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.632 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:25.632 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:25.632 EAL: Ask a virtual 
area of 0x400000000 bytes 00:04:25.632 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:25.632 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:25.632 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.632 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:25.632 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:25.632 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.632 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:25.632 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:25.632 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.632 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:25.632 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:25.632 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.632 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:25.632 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:25.632 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.632 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:25.632 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:25.632 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.632 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:25.632 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:25.632 EAL: Hugepages will be freed exactly as allocated. 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: TSC frequency is ~2700000 KHz 00:04:25.632 EAL: Main lcore 0 is ready (tid=7fb0bc129a00;cpuset=[0]) 00:04:25.632 EAL: Trying to obtain current memory policy. 00:04:25.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.632 EAL: Restoring previous memory policy: 0 00:04:25.632 EAL: request: mp_malloc_sync 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: Heap on socket 0 was expanded by 2MB 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:25.632 EAL: Mem event callback 'spdk:(nil)' registered 00:04:25.632 00:04:25.632 00:04:25.632 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.632 http://cunit.sourceforge.net/ 00:04:25.632 00:04:25.632 00:04:25.632 Suite: components_suite 00:04:25.632 Test: vtophys_malloc_test ...passed 00:04:25.632 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:25.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.632 EAL: Restoring previous memory policy: 4 00:04:25.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.632 EAL: request: mp_malloc_sync 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: Heap on socket 0 was expanded by 4MB 00:04:25.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.632 EAL: request: mp_malloc_sync 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: Heap on socket 0 was shrunk by 4MB 00:04:25.632 EAL: Trying to obtain current memory policy. 
00:04:25.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.632 EAL: Restoring previous memory policy: 4 00:04:25.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.632 EAL: request: mp_malloc_sync 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: Heap on socket 0 was expanded by 6MB 00:04:25.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.632 EAL: request: mp_malloc_sync 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: Heap on socket 0 was shrunk by 6MB 00:04:25.632 EAL: Trying to obtain current memory policy. 00:04:25.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.632 EAL: Restoring previous memory policy: 4 00:04:25.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.632 EAL: request: mp_malloc_sync 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: Heap on socket 0 was expanded by 10MB 00:04:25.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.632 EAL: request: mp_malloc_sync 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: Heap on socket 0 was shrunk by 10MB 00:04:25.632 EAL: Trying to obtain current memory policy. 00:04:25.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.632 EAL: Restoring previous memory policy: 4 00:04:25.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.632 EAL: request: mp_malloc_sync 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: Heap on socket 0 was expanded by 18MB 00:04:25.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.632 EAL: request: mp_malloc_sync 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: Heap on socket 0 was shrunk by 18MB 00:04:25.632 EAL: Trying to obtain current memory policy. 00:04:25.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.632 EAL: Restoring previous memory policy: 4 00:04:25.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.632 EAL: request: mp_malloc_sync 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: Heap on socket 0 was expanded by 34MB 00:04:25.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.632 EAL: request: mp_malloc_sync 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: Heap on socket 0 was shrunk by 34MB 00:04:25.632 EAL: Trying to obtain current memory policy. 00:04:25.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.632 EAL: Restoring previous memory policy: 4 00:04:25.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.632 EAL: request: mp_malloc_sync 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: Heap on socket 0 was expanded by 66MB 00:04:25.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.632 EAL: request: mp_malloc_sync 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: Heap on socket 0 was shrunk by 66MB 00:04:25.632 EAL: Trying to obtain current memory policy. 
00:04:25.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.632 EAL: Restoring previous memory policy: 4 00:04:25.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.632 EAL: request: mp_malloc_sync 00:04:25.632 EAL: No shared files mode enabled, IPC is disabled 00:04:25.632 EAL: Heap on socket 0 was expanded by 130MB 00:04:25.890 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.890 EAL: request: mp_malloc_sync 00:04:25.890 EAL: No shared files mode enabled, IPC is disabled 00:04:25.890 EAL: Heap on socket 0 was shrunk by 130MB 00:04:25.890 EAL: Trying to obtain current memory policy. 00:04:25.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.890 EAL: Restoring previous memory policy: 4 00:04:25.890 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.890 EAL: request: mp_malloc_sync 00:04:25.890 EAL: No shared files mode enabled, IPC is disabled 00:04:25.890 EAL: Heap on socket 0 was expanded by 258MB 00:04:25.890 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.149 EAL: request: mp_malloc_sync 00:04:26.149 EAL: No shared files mode enabled, IPC is disabled 00:04:26.149 EAL: Heap on socket 0 was shrunk by 258MB 00:04:26.149 EAL: Trying to obtain current memory policy. 00:04:26.149 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.409 EAL: Restoring previous memory policy: 4 00:04:26.409 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.409 EAL: request: mp_malloc_sync 00:04:26.409 EAL: No shared files mode enabled, IPC is disabled 00:04:26.409 EAL: Heap on socket 0 was expanded by 514MB 00:04:26.409 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.409 EAL: request: mp_malloc_sync 00:04:26.409 EAL: No shared files mode enabled, IPC is disabled 00:04:26.409 EAL: Heap on socket 0 was shrunk by 514MB 00:04:26.409 EAL: Trying to obtain current memory policy. 
00:04:26.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.976 EAL: Restoring previous memory policy: 4 00:04:26.976 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.976 EAL: request: mp_malloc_sync 00:04:26.976 EAL: No shared files mode enabled, IPC is disabled 00:04:26.976 EAL: Heap on socket 0 was expanded by 1026MB 00:04:27.234 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.493 EAL: request: mp_malloc_sync 00:04:27.493 EAL: No shared files mode enabled, IPC is disabled 00:04:27.493 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:27.493 passed 00:04:27.493 00:04:27.493 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.493 suites 1 1 n/a 0 0 00:04:27.493 tests 2 2 2 0 0 00:04:27.493 asserts 497 497 497 0 n/a 00:04:27.493 00:04:27.493 Elapsed time = 1.869 seconds 00:04:27.493 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.493 EAL: request: mp_malloc_sync 00:04:27.493 EAL: No shared files mode enabled, IPC is disabled 00:04:27.493 EAL: Heap on socket 0 was shrunk by 2MB 00:04:27.493 EAL: No shared files mode enabled, IPC is disabled 00:04:27.493 EAL: No shared files mode enabled, IPC is disabled 00:04:27.493 EAL: No shared files mode enabled, IPC is disabled 00:04:27.493 00:04:27.493 real 0m2.101s 00:04:27.493 user 0m1.054s 00:04:27.493 sys 0m0.995s 00:04:27.493 18:56:33 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.493 18:56:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:27.493 ************************************ 00:04:27.493 END TEST env_vtophys 00:04:27.493 ************************************ 00:04:27.493 18:56:33 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:27.493 18:56:33 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.493 18:56:33 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.493 18:56:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.493 ************************************ 00:04:27.493 START TEST env_pci 00:04:27.493 ************************************ 00:04:27.493 18:56:33 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:27.493 00:04:27.493 00:04:27.493 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.493 http://cunit.sourceforge.net/ 00:04:27.493 00:04:27.493 00:04:27.493 Suite: pci 00:04:27.493 Test: pci_hook ...[2024-07-24 18:56:33.186898] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1521410 has claimed it 00:04:27.753 EAL: Cannot find device (10000:00:01.0) 00:04:27.753 EAL: Failed to attach device on primary process 00:04:27.753 passed 00:04:27.753 00:04:27.753 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.753 suites 1 1 n/a 0 0 00:04:27.753 tests 1 1 1 0 0 00:04:27.753 asserts 25 25 25 0 n/a 00:04:27.753 00:04:27.753 Elapsed time = 0.028 seconds 00:04:27.753 00:04:27.753 real 0m0.042s 00:04:27.753 user 0m0.012s 00:04:27.753 sys 0m0.030s 00:04:27.753 18:56:33 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.753 18:56:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:27.753 ************************************ 00:04:27.753 END TEST env_pci 00:04:27.753 ************************************ 00:04:27.753 18:56:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:27.753 
18:56:33 env -- env/env.sh@15 -- # uname 00:04:27.753 18:56:33 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:27.753 18:56:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:27.753 18:56:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:27.753 18:56:33 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:27.753 18:56:33 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.753 18:56:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.753 ************************************ 00:04:27.753 START TEST env_dpdk_post_init 00:04:27.753 ************************************ 00:04:27.753 18:56:33 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:27.753 EAL: Detected CPU lcores: 48 00:04:27.753 EAL: Detected NUMA nodes: 2 00:04:27.753 EAL: Detected shared linkage of DPDK 00:04:27.753 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:27.753 EAL: Selected IOVA mode 'VA' 00:04:27.753 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.753 EAL: VFIO support initialized 00:04:27.753 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:28.014 EAL: Using IOMMU type 1 (Type 1) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:28.014 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:28.949 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:04:32.234 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:04:32.234 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:04:32.234 Starting DPDK initialization... 00:04:32.234 Starting SPDK post initialization... 00:04:32.234 SPDK NVMe probe 00:04:32.234 Attaching to 0000:82:00.0 00:04:32.234 Attached to 0000:82:00.0 00:04:32.234 Cleaning up... 
00:04:32.234 00:04:32.234 real 0m4.512s 00:04:32.234 user 0m3.290s 00:04:32.234 sys 0m0.268s 00:04:32.234 18:56:37 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.234 18:56:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 ************************************ 00:04:32.234 END TEST env_dpdk_post_init 00:04:32.234 ************************************ 00:04:32.234 18:56:37 env -- env/env.sh@26 -- # uname 00:04:32.234 18:56:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:32.234 18:56:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.234 18:56:37 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.234 18:56:37 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.234 18:56:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.234 ************************************ 00:04:32.234 START TEST env_mem_callbacks 00:04:32.234 ************************************ 00:04:32.234 18:56:37 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.234 EAL: Detected CPU lcores: 48 00:04:32.234 EAL: Detected NUMA nodes: 2 00:04:32.234 EAL: Detected shared linkage of DPDK 00:04:32.234 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.493 EAL: Selected IOVA mode 'VA' 00:04:32.493 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.493 EAL: VFIO support initialized 00:04:32.493 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.493 00:04:32.493 00:04:32.493 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.493 http://cunit.sourceforge.net/ 00:04:32.493 00:04:32.493 00:04:32.493 Suite: memory 00:04:32.493 Test: test ... 
00:04:32.493 register 0x200000200000 2097152 00:04:32.493 malloc 3145728 00:04:32.493 register 0x200000400000 4194304 00:04:32.493 buf 0x200000500000 len 3145728 PASSED 00:04:32.493 malloc 64 00:04:32.493 buf 0x2000004fff40 len 64 PASSED 00:04:32.493 malloc 4194304 00:04:32.493 register 0x200000800000 6291456 00:04:32.493 buf 0x200000a00000 len 4194304 PASSED 00:04:32.493 free 0x200000500000 3145728 00:04:32.493 free 0x2000004fff40 64 00:04:32.493 unregister 0x200000400000 4194304 PASSED 00:04:32.493 free 0x200000a00000 4194304 00:04:32.493 unregister 0x200000800000 6291456 PASSED 00:04:32.493 malloc 8388608 00:04:32.493 register 0x200000400000 10485760 00:04:32.493 buf 0x200000600000 len 8388608 PASSED 00:04:32.493 free 0x200000600000 8388608 00:04:32.493 unregister 0x200000400000 10485760 PASSED 00:04:32.493 passed 00:04:32.493 00:04:32.493 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.493 suites 1 1 n/a 0 0 00:04:32.493 tests 1 1 1 0 0 00:04:32.493 asserts 15 15 15 0 n/a 00:04:32.493 00:04:32.493 Elapsed time = 0.009 seconds 00:04:32.493 00:04:32.493 real 0m0.096s 00:04:32.493 user 0m0.026s 00:04:32.493 sys 0m0.069s 00:04:32.493 18:56:37 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.493 18:56:37 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:32.493 ************************************ 00:04:32.493 END TEST env_mem_callbacks 00:04:32.493 ************************************ 00:04:32.493 00:04:32.493 real 0m7.363s 00:04:32.493 user 0m4.736s 00:04:32.493 sys 0m1.642s 00:04:32.493 18:56:37 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.493 18:56:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.493 ************************************ 00:04:32.493 END TEST env 00:04:32.493 ************************************ 00:04:32.493 18:56:38 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:32.493 18:56:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.493 18:56:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.493 18:56:38 -- common/autotest_common.sh@10 -- # set +x 00:04:32.493 ************************************ 00:04:32.493 START TEST rpc 00:04:32.493 ************************************ 00:04:32.493 18:56:38 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:32.493 * Looking for test storage... 00:04:32.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:32.493 18:56:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1522071 00:04:32.493 18:56:38 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:32.493 18:56:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.493 18:56:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1522071 00:04:32.493 18:56:38 rpc -- common/autotest_common.sh@831 -- # '[' -z 1522071 ']' 00:04:32.493 18:56:38 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.493 18:56:38 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:32.493 18:56:38 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:32.493 18:56:38 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:32.493 18:56:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.752 [2024-07-24 18:56:38.211490] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:04:32.752 [2024-07-24 18:56:38.211590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522071 ] 00:04:32.752 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.752 [2024-07-24 18:56:38.289980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.752 [2024-07-24 18:56:38.424326] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:32.752 [2024-07-24 18:56:38.424400] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1522071' to capture a snapshot of events at runtime. 00:04:32.752 [2024-07-24 18:56:38.424418] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:32.752 [2024-07-24 18:56:38.424446] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:32.752 [2024-07-24 18:56:38.424461] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1522071 for offline analysis/debug. 00:04:32.752 [2024-07-24 18:56:38.424506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.320 18:56:38 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:33.320 18:56:38 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:33.320 18:56:38 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:33.320 18:56:38 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:33.320 18:56:38 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:33.320 18:56:38 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:33.320 18:56:38 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.320 18:56:38 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.320 18:56:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.320 ************************************ 00:04:33.320 START TEST rpc_integrity 00:04:33.320 ************************************ 00:04:33.320 18:56:38 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:33.320 18:56:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:33.320 18:56:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.320 18:56:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.320 18:56:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.320 18:56:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:33.320 18:56:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:33.320 18:56:38 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:33.320 18:56:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:33.320 18:56:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.320 18:56:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.320 18:56:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.320 18:56:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:33.320 18:56:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.320 18:56:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.320 18:56:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.320 18:56:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.320 18:56:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.320 { 00:04:33.320 "name": "Malloc0", 00:04:33.320 "aliases": [ 00:04:33.320 "242c4527-22b3-40a7-842d-ddf39c1577ec" 00:04:33.320 ], 00:04:33.320 "product_name": "Malloc disk", 00:04:33.320 "block_size": 512, 00:04:33.320 "num_blocks": 16384, 00:04:33.320 "uuid": "242c4527-22b3-40a7-842d-ddf39c1577ec", 00:04:33.320 "assigned_rate_limits": { 00:04:33.320 "rw_ios_per_sec": 0, 00:04:33.320 "rw_mbytes_per_sec": 0, 00:04:33.320 "r_mbytes_per_sec": 0, 00:04:33.320 "w_mbytes_per_sec": 0 00:04:33.320 }, 00:04:33.320 "claimed": false, 00:04:33.320 "zoned": false, 00:04:33.320 "supported_io_types": { 00:04:33.320 "read": true, 00:04:33.320 "write": true, 00:04:33.320 "unmap": true, 00:04:33.320 "flush": true, 00:04:33.320 "reset": true, 00:04:33.320 "nvme_admin": false, 00:04:33.320 "nvme_io": false, 00:04:33.320 "nvme_io_md": false, 00:04:33.320 "write_zeroes": true, 00:04:33.320 "zcopy": true, 00:04:33.320 "get_zone_info": false, 00:04:33.320 "zone_management": false, 00:04:33.320 "zone_append": false, 00:04:33.320 "compare": false, 00:04:33.320 "compare_and_write": false, 00:04:33.320 "abort": true, 00:04:33.320 "seek_hole": false, 00:04:33.320 "seek_data": false, 00:04:33.320 "copy": true, 00:04:33.320 "nvme_iov_md": false 00:04:33.320 }, 00:04:33.320 "memory_domains": [ 00:04:33.320 { 00:04:33.320 "dma_device_id": "system", 00:04:33.320 "dma_device_type": 1 00:04:33.320 }, 00:04:33.320 { 00:04:33.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.320 "dma_device_type": 2 00:04:33.320 } 00:04:33.320 ], 00:04:33.320 "driver_specific": {} 00:04:33.320 } 00:04:33.320 ]' 00:04:33.320 18:56:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:33.320 18:56:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.320 18:56:38 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:33.320 18:56:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.320 18:56:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.320 [2024-07-24 18:56:38.997565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:33.320 [2024-07-24 18:56:38.997628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.320 [2024-07-24 18:56:38.997657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23773e0 00:04:33.320 [2024-07-24 18:56:38.997676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.320 [2024-07-24 18:56:39.000509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:04:33.320 [2024-07-24 18:56:39.000543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.320 Passthru0 00:04:33.320 18:56:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.320 18:56:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.320 18:56:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.320 18:56:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.578 18:56:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.579 18:56:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.579 { 00:04:33.579 "name": "Malloc0", 00:04:33.579 "aliases": [ 00:04:33.579 "242c4527-22b3-40a7-842d-ddf39c1577ec" 00:04:33.579 ], 00:04:33.579 "product_name": "Malloc disk", 00:04:33.579 "block_size": 512, 00:04:33.579 "num_blocks": 16384, 00:04:33.579 "uuid": "242c4527-22b3-40a7-842d-ddf39c1577ec", 00:04:33.579 "assigned_rate_limits": { 00:04:33.579 "rw_ios_per_sec": 0, 00:04:33.579 "rw_mbytes_per_sec": 0, 00:04:33.579 "r_mbytes_per_sec": 0, 00:04:33.579 "w_mbytes_per_sec": 0 00:04:33.579 }, 00:04:33.579 "claimed": true, 00:04:33.579 "claim_type": "exclusive_write", 00:04:33.579 "zoned": false, 00:04:33.579 "supported_io_types": { 00:04:33.579 "read": true, 00:04:33.579 "write": true, 00:04:33.579 "unmap": true, 00:04:33.579 "flush": true, 00:04:33.579 "reset": true, 00:04:33.579 "nvme_admin": false, 00:04:33.579 "nvme_io": false, 00:04:33.579 "nvme_io_md": false, 00:04:33.579 "write_zeroes": true, 00:04:33.579 "zcopy": true, 00:04:33.579 "get_zone_info": false, 00:04:33.579 "zone_management": false, 00:04:33.579 "zone_append": false, 00:04:33.579 "compare": false, 00:04:33.579 "compare_and_write": false, 00:04:33.579 "abort": true, 00:04:33.579 "seek_hole": false, 00:04:33.579 "seek_data": false, 00:04:33.579 "copy": true, 00:04:33.579 "nvme_iov_md": false 00:04:33.579 }, 00:04:33.579 "memory_domains": [ 00:04:33.579 { 00:04:33.579 "dma_device_id": "system", 00:04:33.579 "dma_device_type": 1 00:04:33.579 }, 00:04:33.579 { 00:04:33.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.579 "dma_device_type": 2 00:04:33.579 } 00:04:33.579 ], 00:04:33.579 "driver_specific": {} 00:04:33.579 }, 00:04:33.579 { 00:04:33.579 "name": "Passthru0", 00:04:33.579 "aliases": [ 00:04:33.579 "efb488bc-6415-5576-85c3-596682afdaa3" 00:04:33.579 ], 00:04:33.579 "product_name": "passthru", 00:04:33.579 "block_size": 512, 00:04:33.579 "num_blocks": 16384, 00:04:33.579 "uuid": "efb488bc-6415-5576-85c3-596682afdaa3", 00:04:33.579 "assigned_rate_limits": { 00:04:33.579 "rw_ios_per_sec": 0, 00:04:33.579 "rw_mbytes_per_sec": 0, 00:04:33.579 "r_mbytes_per_sec": 0, 00:04:33.579 "w_mbytes_per_sec": 0 00:04:33.579 }, 00:04:33.579 "claimed": false, 00:04:33.579 "zoned": false, 00:04:33.579 "supported_io_types": { 00:04:33.579 "read": true, 00:04:33.579 "write": true, 00:04:33.579 "unmap": true, 00:04:33.579 "flush": true, 00:04:33.579 "reset": true, 00:04:33.579 "nvme_admin": false, 00:04:33.579 "nvme_io": false, 00:04:33.579 "nvme_io_md": false, 00:04:33.579 "write_zeroes": true, 00:04:33.579 "zcopy": true, 00:04:33.579 "get_zone_info": false, 00:04:33.579 "zone_management": false, 00:04:33.579 "zone_append": false, 00:04:33.579 "compare": false, 00:04:33.579 "compare_and_write": false, 00:04:33.579 "abort": true, 00:04:33.579 "seek_hole": false, 00:04:33.579 "seek_data": false, 00:04:33.579 "copy": true, 00:04:33.579 "nvme_iov_md": false 00:04:33.579 
}, 00:04:33.579 "memory_domains": [ 00:04:33.579 { 00:04:33.579 "dma_device_id": "system", 00:04:33.579 "dma_device_type": 1 00:04:33.579 }, 00:04:33.579 { 00:04:33.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.579 "dma_device_type": 2 00:04:33.579 } 00:04:33.579 ], 00:04:33.579 "driver_specific": { 00:04:33.579 "passthru": { 00:04:33.579 "name": "Passthru0", 00:04:33.579 "base_bdev_name": "Malloc0" 00:04:33.579 } 00:04:33.579 } 00:04:33.579 } 00:04:33.579 ]' 00:04:33.579 18:56:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:33.579 18:56:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.579 18:56:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.579 18:56:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.579 18:56:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.579 18:56:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.579 18:56:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:33.579 18:56:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.579 18:56:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.579 18:56:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.579 18:56:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:33.579 18:56:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.579 18:56:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.579 18:56:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.579 18:56:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:33.579 18:56:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:33.579 18:56:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:33.579 00:04:33.579 real 0m0.342s 00:04:33.579 user 0m0.241s 00:04:33.579 sys 0m0.030s 00:04:33.579 18:56:39 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.579 18:56:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.579 ************************************ 00:04:33.579 END TEST rpc_integrity 00:04:33.579 ************************************ 00:04:33.579 18:56:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:33.579 18:56:39 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.579 18:56:39 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.579 18:56:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.579 ************************************ 00:04:33.579 START TEST rpc_plugins 00:04:33.579 ************************************ 00:04:33.579 18:56:39 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:33.579 18:56:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:33.579 18:56:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.579 18:56:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.579 18:56:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.579 18:56:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:33.579 18:56:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:33.579 18:56:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.579 18:56:39 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:33.837 18:56:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.837 18:56:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:33.837 { 00:04:33.837 "name": "Malloc1", 00:04:33.837 "aliases": [ 00:04:33.837 "9cf5741a-4f2d-45fb-b53e-77b1ebd1e035" 00:04:33.837 ], 00:04:33.837 "product_name": "Malloc disk", 00:04:33.837 "block_size": 4096, 00:04:33.837 "num_blocks": 256, 00:04:33.837 "uuid": "9cf5741a-4f2d-45fb-b53e-77b1ebd1e035", 00:04:33.837 "assigned_rate_limits": { 00:04:33.837 "rw_ios_per_sec": 0, 00:04:33.837 "rw_mbytes_per_sec": 0, 00:04:33.837 "r_mbytes_per_sec": 0, 00:04:33.837 "w_mbytes_per_sec": 0 00:04:33.837 }, 00:04:33.837 "claimed": false, 00:04:33.837 "zoned": false, 00:04:33.837 "supported_io_types": { 00:04:33.837 "read": true, 00:04:33.837 "write": true, 00:04:33.837 "unmap": true, 00:04:33.837 "flush": true, 00:04:33.837 "reset": true, 00:04:33.837 "nvme_admin": false, 00:04:33.837 "nvme_io": false, 00:04:33.837 "nvme_io_md": false, 00:04:33.837 "write_zeroes": true, 00:04:33.837 "zcopy": true, 00:04:33.837 "get_zone_info": false, 00:04:33.837 "zone_management": false, 00:04:33.837 "zone_append": false, 00:04:33.837 "compare": false, 00:04:33.837 "compare_and_write": false, 00:04:33.837 "abort": true, 00:04:33.837 "seek_hole": false, 00:04:33.837 "seek_data": false, 00:04:33.837 "copy": true, 00:04:33.837 "nvme_iov_md": false 00:04:33.837 }, 00:04:33.837 "memory_domains": [ 00:04:33.837 { 00:04:33.837 "dma_device_id": "system", 00:04:33.837 "dma_device_type": 1 00:04:33.837 }, 00:04:33.837 { 00:04:33.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.837 "dma_device_type": 2 00:04:33.837 } 00:04:33.837 ], 00:04:33.837 "driver_specific": {} 00:04:33.837 } 00:04:33.837 ]' 00:04:33.837 18:56:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:33.837 18:56:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:33.837 18:56:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:33.837 18:56:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.837 18:56:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.837 18:56:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.837 18:56:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:33.837 18:56:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.837 18:56:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.837 18:56:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.837 18:56:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:33.837 18:56:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:33.837 18:56:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:33.837 00:04:33.837 real 0m0.199s 00:04:33.837 user 0m0.150s 00:04:33.837 sys 0m0.016s 00:04:33.837 18:56:39 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.837 18:56:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.837 ************************************ 00:04:33.837 END TEST rpc_plugins 00:04:33.837 ************************************ 00:04:33.837 18:56:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:33.837 18:56:39 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.837 18:56:39 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.837 18:56:39 
rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.837 ************************************ 00:04:33.837 START TEST rpc_trace_cmd_test 00:04:33.837 ************************************ 00:04:33.837 18:56:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:33.837 18:56:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:33.837 18:56:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:33.837 18:56:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.837 18:56:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.095 18:56:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.095 18:56:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:34.095 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1522071", 00:04:34.095 "tpoint_group_mask": "0x8", 00:04:34.095 "iscsi_conn": { 00:04:34.095 "mask": "0x2", 00:04:34.095 "tpoint_mask": "0x0" 00:04:34.095 }, 00:04:34.095 "scsi": { 00:04:34.095 "mask": "0x4", 00:04:34.095 "tpoint_mask": "0x0" 00:04:34.095 }, 00:04:34.095 "bdev": { 00:04:34.095 "mask": "0x8", 00:04:34.095 "tpoint_mask": "0xffffffffffffffff" 00:04:34.095 }, 00:04:34.095 "nvmf_rdma": { 00:04:34.095 "mask": "0x10", 00:04:34.095 "tpoint_mask": "0x0" 00:04:34.095 }, 00:04:34.095 "nvmf_tcp": { 00:04:34.095 "mask": "0x20", 00:04:34.095 "tpoint_mask": "0x0" 00:04:34.095 }, 00:04:34.095 "ftl": { 00:04:34.095 "mask": "0x40", 00:04:34.095 "tpoint_mask": "0x0" 00:04:34.095 }, 00:04:34.095 "blobfs": { 00:04:34.095 "mask": "0x80", 00:04:34.095 "tpoint_mask": "0x0" 00:04:34.095 }, 00:04:34.095 "dsa": { 00:04:34.095 "mask": "0x200", 00:04:34.095 "tpoint_mask": "0x0" 00:04:34.095 }, 00:04:34.095 "thread": { 00:04:34.095 "mask": "0x400", 00:04:34.095 "tpoint_mask": "0x0" 00:04:34.095 }, 00:04:34.095 "nvme_pcie": { 00:04:34.095 "mask": "0x800", 00:04:34.095 "tpoint_mask": "0x0" 00:04:34.095 }, 00:04:34.095 "iaa": { 00:04:34.095 "mask": "0x1000", 00:04:34.095 "tpoint_mask": "0x0" 00:04:34.095 }, 00:04:34.095 "nvme_tcp": { 00:04:34.095 "mask": "0x2000", 00:04:34.095 "tpoint_mask": "0x0" 00:04:34.095 }, 00:04:34.095 "bdev_nvme": { 00:04:34.095 "mask": "0x4000", 00:04:34.095 "tpoint_mask": "0x0" 00:04:34.095 }, 00:04:34.095 "sock": { 00:04:34.095 "mask": "0x8000", 00:04:34.095 "tpoint_mask": "0x0" 00:04:34.095 } 00:04:34.095 }' 00:04:34.095 18:56:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:34.095 18:56:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:34.095 18:56:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:34.095 18:56:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:34.095 18:56:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:34.095 18:56:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:34.095 18:56:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:34.356 18:56:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:34.356 18:56:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:34.356 18:56:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:34.356 00:04:34.356 real 0m0.381s 00:04:34.356 user 0m0.343s 00:04:34.356 sys 0m0.028s 00:04:34.356 18:56:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.356 18:56:39 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.356 ************************************ 00:04:34.356 END TEST rpc_trace_cmd_test 00:04:34.356 ************************************ 00:04:34.356 18:56:39 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:34.356 18:56:39 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:34.356 18:56:39 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:34.356 18:56:39 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.356 18:56:39 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.356 18:56:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.356 ************************************ 00:04:34.356 START TEST rpc_daemon_integrity 00:04:34.356 ************************************ 00:04:34.356 18:56:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:34.356 18:56:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.356 18:56:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.356 18:56:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.356 18:56:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.356 18:56:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.356 18:56:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:34.356 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.356 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.356 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.356 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.613 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.613 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:34.613 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.613 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.613 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.613 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.613 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.613 { 00:04:34.613 "name": "Malloc2", 00:04:34.613 "aliases": [ 00:04:34.613 "2e5d4dff-3f3f-4451-9fcb-1ea783a5ce75" 00:04:34.613 ], 00:04:34.613 "product_name": "Malloc disk", 00:04:34.613 "block_size": 512, 00:04:34.613 "num_blocks": 16384, 00:04:34.613 "uuid": "2e5d4dff-3f3f-4451-9fcb-1ea783a5ce75", 00:04:34.613 "assigned_rate_limits": { 00:04:34.613 "rw_ios_per_sec": 0, 00:04:34.613 "rw_mbytes_per_sec": 0, 00:04:34.613 "r_mbytes_per_sec": 0, 00:04:34.613 "w_mbytes_per_sec": 0 00:04:34.613 }, 00:04:34.613 "claimed": false, 00:04:34.614 "zoned": false, 00:04:34.614 "supported_io_types": { 00:04:34.614 "read": true, 00:04:34.614 "write": true, 00:04:34.614 "unmap": true, 00:04:34.614 "flush": true, 00:04:34.614 "reset": true, 00:04:34.614 "nvme_admin": false, 00:04:34.614 "nvme_io": false, 00:04:34.614 "nvme_io_md": false, 00:04:34.614 "write_zeroes": true, 00:04:34.614 "zcopy": true, 00:04:34.614 "get_zone_info": false, 00:04:34.614 "zone_management": false, 00:04:34.614 "zone_append": false, 00:04:34.614 "compare": false, 00:04:34.614 "compare_and_write": false, 
00:04:34.614 "abort": true, 00:04:34.614 "seek_hole": false, 00:04:34.614 "seek_data": false, 00:04:34.614 "copy": true, 00:04:34.614 "nvme_iov_md": false 00:04:34.614 }, 00:04:34.614 "memory_domains": [ 00:04:34.614 { 00:04:34.614 "dma_device_id": "system", 00:04:34.614 "dma_device_type": 1 00:04:34.614 }, 00:04:34.614 { 00:04:34.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.614 "dma_device_type": 2 00:04:34.614 } 00:04:34.614 ], 00:04:34.614 "driver_specific": {} 00:04:34.614 } 00:04:34.614 ]' 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.614 [2024-07-24 18:56:40.112508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:34.614 [2024-07-24 18:56:40.112562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.614 [2024-07-24 18:56:40.112602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2377610 00:04:34.614 [2024-07-24 18:56:40.112622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.614 [2024-07-24 18:56:40.115260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.614 [2024-07-24 18:56:40.115323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.614 Passthru0 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.614 { 00:04:34.614 "name": "Malloc2", 00:04:34.614 "aliases": [ 00:04:34.614 "2e5d4dff-3f3f-4451-9fcb-1ea783a5ce75" 00:04:34.614 ], 00:04:34.614 "product_name": "Malloc disk", 00:04:34.614 "block_size": 512, 00:04:34.614 "num_blocks": 16384, 00:04:34.614 "uuid": "2e5d4dff-3f3f-4451-9fcb-1ea783a5ce75", 00:04:34.614 "assigned_rate_limits": { 00:04:34.614 "rw_ios_per_sec": 0, 00:04:34.614 "rw_mbytes_per_sec": 0, 00:04:34.614 "r_mbytes_per_sec": 0, 00:04:34.614 "w_mbytes_per_sec": 0 00:04:34.614 }, 00:04:34.614 "claimed": true, 00:04:34.614 "claim_type": "exclusive_write", 00:04:34.614 "zoned": false, 00:04:34.614 "supported_io_types": { 00:04:34.614 "read": true, 00:04:34.614 "write": true, 00:04:34.614 "unmap": true, 00:04:34.614 "flush": true, 00:04:34.614 "reset": true, 00:04:34.614 "nvme_admin": false, 00:04:34.614 "nvme_io": false, 00:04:34.614 "nvme_io_md": false, 00:04:34.614 "write_zeroes": true, 00:04:34.614 "zcopy": true, 00:04:34.614 "get_zone_info": false, 00:04:34.614 "zone_management": false, 00:04:34.614 "zone_append": false, 00:04:34.614 "compare": false, 00:04:34.614 "compare_and_write": false, 00:04:34.614 "abort": true, 00:04:34.614 "seek_hole": false, 00:04:34.614 "seek_data": false, 00:04:34.614 "copy": true, 
00:04:34.614 "nvme_iov_md": false 00:04:34.614 }, 00:04:34.614 "memory_domains": [ 00:04:34.614 { 00:04:34.614 "dma_device_id": "system", 00:04:34.614 "dma_device_type": 1 00:04:34.614 }, 00:04:34.614 { 00:04:34.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.614 "dma_device_type": 2 00:04:34.614 } 00:04:34.614 ], 00:04:34.614 "driver_specific": {} 00:04:34.614 }, 00:04:34.614 { 00:04:34.614 "name": "Passthru0", 00:04:34.614 "aliases": [ 00:04:34.614 "7fe2716e-ba54-55ec-8bc6-e465414aa387" 00:04:34.614 ], 00:04:34.614 "product_name": "passthru", 00:04:34.614 "block_size": 512, 00:04:34.614 "num_blocks": 16384, 00:04:34.614 "uuid": "7fe2716e-ba54-55ec-8bc6-e465414aa387", 00:04:34.614 "assigned_rate_limits": { 00:04:34.614 "rw_ios_per_sec": 0, 00:04:34.614 "rw_mbytes_per_sec": 0, 00:04:34.614 "r_mbytes_per_sec": 0, 00:04:34.614 "w_mbytes_per_sec": 0 00:04:34.614 }, 00:04:34.614 "claimed": false, 00:04:34.614 "zoned": false, 00:04:34.614 "supported_io_types": { 00:04:34.614 "read": true, 00:04:34.614 "write": true, 00:04:34.614 "unmap": true, 00:04:34.614 "flush": true, 00:04:34.614 "reset": true, 00:04:34.614 "nvme_admin": false, 00:04:34.614 "nvme_io": false, 00:04:34.614 "nvme_io_md": false, 00:04:34.614 "write_zeroes": true, 00:04:34.614 "zcopy": true, 00:04:34.614 "get_zone_info": false, 00:04:34.614 "zone_management": false, 00:04:34.614 "zone_append": false, 00:04:34.614 "compare": false, 00:04:34.614 "compare_and_write": false, 00:04:34.614 "abort": true, 00:04:34.614 "seek_hole": false, 00:04:34.614 "seek_data": false, 00:04:34.614 "copy": true, 00:04:34.614 "nvme_iov_md": false 00:04:34.614 }, 00:04:34.614 "memory_domains": [ 00:04:34.614 { 00:04:34.614 "dma_device_id": "system", 00:04:34.614 "dma_device_type": 1 00:04:34.614 }, 00:04:34.614 { 00:04:34.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.614 "dma_device_type": 2 00:04:34.614 } 00:04:34.614 ], 00:04:34.614 "driver_specific": { 00:04:34.614 "passthru": { 00:04:34.614 "name": "Passthru0", 00:04:34.614 "base_bdev_name": "Malloc2" 00:04:34.614 } 00:04:34.614 } 00:04:34.614 } 00:04:34.614 ]' 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.614 18:56:40 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.614 00:04:34.614 real 0m0.298s 00:04:34.614 user 0m0.201s 00:04:34.614 sys 0m0.041s 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.614 18:56:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.614 ************************************ 00:04:34.614 END TEST rpc_daemon_integrity 00:04:34.614 ************************************ 00:04:34.614 18:56:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:34.614 18:56:40 rpc -- rpc/rpc.sh@84 -- # killprocess 1522071 00:04:34.614 18:56:40 rpc -- common/autotest_common.sh@950 -- # '[' -z 1522071 ']' 00:04:34.614 18:56:40 rpc -- common/autotest_common.sh@954 -- # kill -0 1522071 00:04:34.614 18:56:40 rpc -- common/autotest_common.sh@955 -- # uname 00:04:34.614 18:56:40 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:34.614 18:56:40 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1522071 00:04:34.614 18:56:40 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:34.872 18:56:40 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:34.872 18:56:40 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1522071' 00:04:34.872 killing process with pid 1522071 00:04:34.872 18:56:40 rpc -- common/autotest_common.sh@969 -- # kill 1522071 00:04:34.872 18:56:40 rpc -- common/autotest_common.sh@974 -- # wait 1522071 00:04:35.439 00:04:35.439 real 0m2.911s 00:04:35.439 user 0m3.722s 00:04:35.439 sys 0m0.857s 00:04:35.439 18:56:40 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.439 18:56:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.439 ************************************ 00:04:35.439 END TEST rpc 00:04:35.439 ************************************ 00:04:35.439 18:56:41 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:35.439 18:56:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.439 18:56:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.439 18:56:41 -- common/autotest_common.sh@10 -- # set +x 00:04:35.439 ************************************ 00:04:35.439 START TEST skip_rpc 00:04:35.439 ************************************ 00:04:35.439 18:56:41 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:35.439 * Looking for test storage... 
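The rpc suite that just finished above walks a malloc/passthru bdev pair through its full JSON-RPC lifecycle: create, enumerate, claim, and tear down, checking the bdev count with jq at each step. A minimal sketch of replaying that flow by hand, assuming a running spdk_tgt and SPDK's scripts/rpc.py client (the rpc_cmd wrapper in the log drives the same RPCs); bdev names follow the log:

    # create an 8 MiB malloc bdev with 512-byte blocks (16384 blocks, as in the dump above)
    rpc.py bdev_malloc_create 8 512                 # prints the new name (Malloc0 in the log)
    rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    rpc.py bdev_get_bdevs | jq length               # expect 2; Malloc0 now shows "claimed": true
    rpc.py bdev_passthru_delete Passthru0
    rpc.py bdev_malloc_delete Malloc0
    rpc.py bdev_get_bdevs | jq length               # expect 0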
00:04:35.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.439 18:56:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:35.439 18:56:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:35.439 18:56:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:35.439 18:56:41 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.439 18:56:41 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.439 18:56:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.439 ************************************ 00:04:35.439 START TEST skip_rpc 00:04:35.439 ************************************ 00:04:35.439 18:56:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:35.439 18:56:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1522547 00:04:35.439 18:56:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:35.439 18:56:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.439 18:56:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:35.698 [2024-07-24 18:56:41.183278] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:04:35.698 [2024-07-24 18:56:41.183373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522547 ] 00:04:35.698 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.698 [2024-07-24 18:56:41.283572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.957 [2024-07-24 18:56:41.487548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1522547 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1522547 ']' 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1522547 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1522547 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1522547' 00:04:41.231 killing process with pid 1522547 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1522547 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1522547 00:04:41.231 00:04:41.231 real 0m5.696s 00:04:41.231 user 0m5.187s 00:04:41.231 sys 0m0.493s 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.231 18:56:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.231 ************************************ 00:04:41.231 END TEST skip_rpc 00:04:41.231 ************************************ 00:04:41.231 18:56:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:41.231 18:56:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.231 18:56:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.231 18:56:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.231 ************************************ 00:04:41.231 START TEST skip_rpc_with_json 00:04:41.231 ************************************ 00:04:41.231 18:56:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:41.231 18:56:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:41.231 18:56:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1523208 00:04:41.231 18:56:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.231 18:56:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.231 18:56:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1523208 00:04:41.231 18:56:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1523208 ']' 00:04:41.231 18:56:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.231 18:56:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:41.231 18:56:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
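The skip_rpc case above is a negative test: spdk_tgt is launched with --no-rpc-server, so the suite expects any RPC call to fail. A sketch of the same check, assuming the binary path shown in the log and scripts/rpc.py on PATH:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5                                         # the test sleeps rather than polling the socket
    if rpc.py spdk_get_version; then
        echo "FAIL: RPC server should not be listening"
    else
        echo "PASS: rpc correctly unavailable"      # a non-zero exit is the pass condition
    fi
    kill %1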
00:04:41.231 18:56:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:41.231 18:56:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.490 [2024-07-24 18:56:46.947595] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:04:41.490 [2024-07-24 18:56:46.947695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523208 ] 00:04:41.490 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.490 [2024-07-24 18:56:47.038507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.763 [2024-07-24 18:56:47.236820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.700 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:42.700 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:42.700 18:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:42.700 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.700 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.700 [2024-07-24 18:56:48.125533] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:42.700 request: 00:04:42.700 { 00:04:42.700 "trtype": "tcp", 00:04:42.700 "method": "nvmf_get_transports", 00:04:42.700 "req_id": 1 00:04:42.700 } 00:04:42.700 Got JSON-RPC error response 00:04:42.700 response: 00:04:42.700 { 00:04:42.700 "code": -19, 00:04:42.700 "message": "No such device" 00:04:42.700 } 00:04:42.700 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:42.700 18:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:42.700 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.700 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.700 [2024-07-24 18:56:48.137736] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.700 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.700 18:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:42.700 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.700 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.700 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.700 18:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:42.700 { 00:04:42.700 "subsystems": [ 00:04:42.700 { 00:04:42.700 "subsystem": "vfio_user_target", 00:04:42.700 "config": null 00:04:42.700 }, 00:04:42.700 { 00:04:42.700 "subsystem": "keyring", 00:04:42.700 "config": [] 00:04:42.700 }, 00:04:42.700 { 00:04:42.700 "subsystem": "iobuf", 00:04:42.700 "config": [ 00:04:42.700 { 00:04:42.700 "method": "iobuf_set_options", 00:04:42.700 "params": { 00:04:42.700 "small_pool_count": 8192, 00:04:42.700 "large_pool_count": 1024, 00:04:42.700 "small_bufsize": 8192, 00:04:42.700 "large_bufsize": 
135168 00:04:42.700 } 00:04:42.700 } 00:04:42.700 ] 00:04:42.700 }, 00:04:42.700 { 00:04:42.700 "subsystem": "sock", 00:04:42.700 "config": [ 00:04:42.700 { 00:04:42.700 "method": "sock_set_default_impl", 00:04:42.700 "params": { 00:04:42.700 "impl_name": "posix" 00:04:42.700 } 00:04:42.700 }, 00:04:42.700 { 00:04:42.700 "method": "sock_impl_set_options", 00:04:42.700 "params": { 00:04:42.700 "impl_name": "ssl", 00:04:42.700 "recv_buf_size": 4096, 00:04:42.700 "send_buf_size": 4096, 00:04:42.700 "enable_recv_pipe": true, 00:04:42.700 "enable_quickack": false, 00:04:42.700 "enable_placement_id": 0, 00:04:42.700 "enable_zerocopy_send_server": true, 00:04:42.700 "enable_zerocopy_send_client": false, 00:04:42.700 "zerocopy_threshold": 0, 00:04:42.700 "tls_version": 0, 00:04:42.700 "enable_ktls": false 00:04:42.700 } 00:04:42.700 }, 00:04:42.700 { 00:04:42.700 "method": "sock_impl_set_options", 00:04:42.700 "params": { 00:04:42.700 "impl_name": "posix", 00:04:42.700 "recv_buf_size": 2097152, 00:04:42.700 "send_buf_size": 2097152, 00:04:42.700 "enable_recv_pipe": true, 00:04:42.700 "enable_quickack": false, 00:04:42.700 "enable_placement_id": 0, 00:04:42.700 "enable_zerocopy_send_server": true, 00:04:42.700 "enable_zerocopy_send_client": false, 00:04:42.700 "zerocopy_threshold": 0, 00:04:42.700 "tls_version": 0, 00:04:42.700 "enable_ktls": false 00:04:42.700 } 00:04:42.700 } 00:04:42.700 ] 00:04:42.700 }, 00:04:42.700 { 00:04:42.700 "subsystem": "vmd", 00:04:42.700 "config": [] 00:04:42.700 }, 00:04:42.700 { 00:04:42.700 "subsystem": "accel", 00:04:42.700 "config": [ 00:04:42.700 { 00:04:42.700 "method": "accel_set_options", 00:04:42.700 "params": { 00:04:42.700 "small_cache_size": 128, 00:04:42.700 "large_cache_size": 16, 00:04:42.700 "task_count": 2048, 00:04:42.700 "sequence_count": 2048, 00:04:42.700 "buf_count": 2048 00:04:42.700 } 00:04:42.700 } 00:04:42.700 ] 00:04:42.700 }, 00:04:42.700 { 00:04:42.700 "subsystem": "bdev", 00:04:42.700 "config": [ 00:04:42.700 { 00:04:42.700 "method": "bdev_set_options", 00:04:42.700 "params": { 00:04:42.700 "bdev_io_pool_size": 65535, 00:04:42.700 "bdev_io_cache_size": 256, 00:04:42.700 "bdev_auto_examine": true, 00:04:42.700 "iobuf_small_cache_size": 128, 00:04:42.700 "iobuf_large_cache_size": 16 00:04:42.700 } 00:04:42.700 }, 00:04:42.700 { 00:04:42.700 "method": "bdev_raid_set_options", 00:04:42.700 "params": { 00:04:42.700 "process_window_size_kb": 1024, 00:04:42.700 "process_max_bandwidth_mb_sec": 0 00:04:42.700 } 00:04:42.700 }, 00:04:42.700 { 00:04:42.700 "method": "bdev_iscsi_set_options", 00:04:42.700 "params": { 00:04:42.700 "timeout_sec": 30 00:04:42.700 } 00:04:42.700 }, 00:04:42.700 { 00:04:42.700 "method": "bdev_nvme_set_options", 00:04:42.700 "params": { 00:04:42.700 "action_on_timeout": "none", 00:04:42.700 "timeout_us": 0, 00:04:42.700 "timeout_admin_us": 0, 00:04:42.700 "keep_alive_timeout_ms": 10000, 00:04:42.700 "arbitration_burst": 0, 00:04:42.700 "low_priority_weight": 0, 00:04:42.700 "medium_priority_weight": 0, 00:04:42.700 "high_priority_weight": 0, 00:04:42.700 "nvme_adminq_poll_period_us": 10000, 00:04:42.700 "nvme_ioq_poll_period_us": 0, 00:04:42.700 "io_queue_requests": 0, 00:04:42.700 "delay_cmd_submit": true, 00:04:42.700 "transport_retry_count": 4, 00:04:42.700 "bdev_retry_count": 3, 00:04:42.700 "transport_ack_timeout": 0, 00:04:42.700 "ctrlr_loss_timeout_sec": 0, 00:04:42.700 "reconnect_delay_sec": 0, 00:04:42.700 "fast_io_fail_timeout_sec": 0, 00:04:42.700 "disable_auto_failback": false, 00:04:42.700 "generate_uuids": 
false, 00:04:42.700 "transport_tos": 0, 00:04:42.700 "nvme_error_stat": false, 00:04:42.700 "rdma_srq_size": 0, 00:04:42.700 "io_path_stat": false, 00:04:42.700 "allow_accel_sequence": false, 00:04:42.700 "rdma_max_cq_size": 0, 00:04:42.700 "rdma_cm_event_timeout_ms": 0, 00:04:42.700 "dhchap_digests": [ 00:04:42.700 "sha256", 00:04:42.700 "sha384", 00:04:42.700 "sha512" 00:04:42.700 ], 00:04:42.700 "dhchap_dhgroups": [ 00:04:42.700 "null", 00:04:42.700 "ffdhe2048", 00:04:42.700 "ffdhe3072", 00:04:42.700 "ffdhe4096", 00:04:42.700 "ffdhe6144", 00:04:42.700 "ffdhe8192" 00:04:42.700 ] 00:04:42.700 } 00:04:42.700 }, 00:04:42.700 { 00:04:42.700 "method": "bdev_nvme_set_hotplug", 00:04:42.700 "params": { 00:04:42.700 "period_us": 100000, 00:04:42.700 "enable": false 00:04:42.700 } 00:04:42.701 }, 00:04:42.701 { 00:04:42.701 "method": "bdev_wait_for_examine" 00:04:42.701 } 00:04:42.701 ] 00:04:42.701 }, 00:04:42.701 { 00:04:42.701 "subsystem": "scsi", 00:04:42.701 "config": null 00:04:42.701 }, 00:04:42.701 { 00:04:42.701 "subsystem": "scheduler", 00:04:42.701 "config": [ 00:04:42.701 { 00:04:42.701 "method": "framework_set_scheduler", 00:04:42.701 "params": { 00:04:42.701 "name": "static" 00:04:42.701 } 00:04:42.701 } 00:04:42.701 ] 00:04:42.701 }, 00:04:42.701 { 00:04:42.701 "subsystem": "vhost_scsi", 00:04:42.701 "config": [] 00:04:42.701 }, 00:04:42.701 { 00:04:42.701 "subsystem": "vhost_blk", 00:04:42.701 "config": [] 00:04:42.701 }, 00:04:42.701 { 00:04:42.701 "subsystem": "ublk", 00:04:42.701 "config": [] 00:04:42.701 }, 00:04:42.701 { 00:04:42.701 "subsystem": "nbd", 00:04:42.701 "config": [] 00:04:42.701 }, 00:04:42.701 { 00:04:42.701 "subsystem": "nvmf", 00:04:42.701 "config": [ 00:04:42.701 { 00:04:42.701 "method": "nvmf_set_config", 00:04:42.701 "params": { 00:04:42.701 "discovery_filter": "match_any", 00:04:42.701 "admin_cmd_passthru": { 00:04:42.701 "identify_ctrlr": false 00:04:42.701 } 00:04:42.701 } 00:04:42.701 }, 00:04:42.701 { 00:04:42.701 "method": "nvmf_set_max_subsystems", 00:04:42.701 "params": { 00:04:42.701 "max_subsystems": 1024 00:04:42.701 } 00:04:42.701 }, 00:04:42.701 { 00:04:42.701 "method": "nvmf_set_crdt", 00:04:42.701 "params": { 00:04:42.701 "crdt1": 0, 00:04:42.701 "crdt2": 0, 00:04:42.701 "crdt3": 0 00:04:42.701 } 00:04:42.701 }, 00:04:42.701 { 00:04:42.701 "method": "nvmf_create_transport", 00:04:42.701 "params": { 00:04:42.701 "trtype": "TCP", 00:04:42.701 "max_queue_depth": 128, 00:04:42.701 "max_io_qpairs_per_ctrlr": 127, 00:04:42.701 "in_capsule_data_size": 4096, 00:04:42.701 "max_io_size": 131072, 00:04:42.701 "io_unit_size": 131072, 00:04:42.701 "max_aq_depth": 128, 00:04:42.701 "num_shared_buffers": 511, 00:04:42.701 "buf_cache_size": 4294967295, 00:04:42.701 "dif_insert_or_strip": false, 00:04:42.701 "zcopy": false, 00:04:42.701 "c2h_success": true, 00:04:42.701 "sock_priority": 0, 00:04:42.701 "abort_timeout_sec": 1, 00:04:42.701 "ack_timeout": 0, 00:04:42.701 "data_wr_pool_size": 0 00:04:42.701 } 00:04:42.701 } 00:04:42.701 ] 00:04:42.701 }, 00:04:42.701 { 00:04:42.701 "subsystem": "iscsi", 00:04:42.701 "config": [ 00:04:42.701 { 00:04:42.701 "method": "iscsi_set_options", 00:04:42.701 "params": { 00:04:42.701 "node_base": "iqn.2016-06.io.spdk", 00:04:42.701 "max_sessions": 128, 00:04:42.701 "max_connections_per_session": 2, 00:04:42.701 "max_queue_depth": 64, 00:04:42.701 "default_time2wait": 2, 00:04:42.701 "default_time2retain": 20, 00:04:42.701 "first_burst_length": 8192, 00:04:42.701 "immediate_data": true, 00:04:42.701 "allow_duplicated_isid": 
false, 00:04:42.701 "error_recovery_level": 0, 00:04:42.701 "nop_timeout": 60, 00:04:42.701 "nop_in_interval": 30, 00:04:42.701 "disable_chap": false, 00:04:42.701 "require_chap": false, 00:04:42.701 "mutual_chap": false, 00:04:42.701 "chap_group": 0, 00:04:42.701 "max_large_datain_per_connection": 64, 00:04:42.701 "max_r2t_per_connection": 4, 00:04:42.701 "pdu_pool_size": 36864, 00:04:42.701 "immediate_data_pool_size": 16384, 00:04:42.701 "data_out_pool_size": 2048 00:04:42.701 } 00:04:42.701 } 00:04:42.701 ] 00:04:42.701 } 00:04:42.701 ] 00:04:42.701 } 00:04:42.701 18:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:42.701 18:56:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1523208 00:04:42.701 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1523208 ']' 00:04:42.701 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1523208 00:04:42.701 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:42.701 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.701 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1523208 00:04:42.701 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.701 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.701 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1523208' 00:04:42.701 killing process with pid 1523208 00:04:42.701 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1523208 00:04:42.701 18:56:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1523208 00:04:43.636 18:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1523478 00:04:43.636 18:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.636 18:56:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:48.932 18:56:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1523478 00:04:48.932 18:56:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1523478 ']' 00:04:48.932 18:56:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1523478 00:04:48.932 18:56:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:48.932 18:56:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.932 18:56:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1523478 00:04:48.932 18:56:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:48.932 18:56:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:48.932 18:56:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1523478' 00:04:48.932 killing process with pid 1523478 00:04:48.932 18:56:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1523478 00:04:48.932 18:56:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
1523478 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:49.197 00:04:49.197 real 0m7.828s 00:04:49.197 user 0m7.424s 00:04:49.197 sys 0m1.183s 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.197 ************************************ 00:04:49.197 END TEST skip_rpc_with_json 00:04:49.197 ************************************ 00:04:49.197 18:56:54 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:49.197 18:56:54 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.197 18:56:54 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.197 18:56:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.197 ************************************ 00:04:49.197 START TEST skip_rpc_with_delay 00:04:49.197 ************************************ 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:49.197 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.456 [2024-07-24 18:56:54.898322] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
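The skip_rpc_with_json case above proves the save/replay round trip: a transport created over RPC is captured by save_config, and a fresh target started from that JSON re-creates it without any RPC calls, which the grep for 'TCP Transport Init' confirms. A condensed sketch, with the config and log paths abbreviated from the full workspace paths in the log and the output redirect assumed:

    rpc.py nvmf_get_transports --trtype tcp         # errors until the transport exists
    rpc.py nvmf_create_transport -t tcp
    rpc.py save_config > config.json                # produces the JSON dump shown above
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt            # transport re-created from config alone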
00:04:49.456 [2024-07-24 18:56:54.898578] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:49.456 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:49.456 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:49.456 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:49.456 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:49.456 00:04:49.456 real 0m0.144s 00:04:49.456 user 0m0.099s 00:04:49.456 sys 0m0.043s 00:04:49.456 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.456 18:56:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:49.456 ************************************ 00:04:49.456 END TEST skip_rpc_with_delay 00:04:49.456 ************************************ 00:04:49.456 18:56:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:49.456 18:56:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:49.456 18:56:54 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:49.456 18:56:54 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.456 18:56:54 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.456 18:56:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.456 ************************************ 00:04:49.456 START TEST exit_on_failed_rpc_init 00:04:49.456 ************************************ 00:04:49.456 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:49.456 18:56:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1524200 00:04:49.456 18:56:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.456 18:56:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1524200 00:04:49.456 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1524200 ']' 00:04:49.456 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.456 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.456 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.456 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.456 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.456 [2024-07-24 18:56:55.062907] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
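skip_rpc_with_delay, completed just above, checks argument validation rather than RPC behavior: spdk_tgt must refuse to combine --wait-for-rpc with --no-rpc-server, failing with exactly the error recorded in the log. In sketch form:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # must exit non-zero with:
    #   spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC
    #   server is going to be started.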
00:04:49.456 [2024-07-24 18:56:55.063005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524200 ] 00:04:49.456 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.715 [2024-07-24 18:56:55.158032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.715 [2024-07-24 18:56:55.365763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.283 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:50.283 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:50.283 18:56:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.283 18:56:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.283 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:50.283 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.283 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.283 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.283 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.283 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.283 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.283 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:50.283 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.283 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:50.283 18:56:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.283 [2024-07-24 18:56:55.910756] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
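The repeated type -t / type -P / [[ -x ]] dance is the harness validating that whatever NOT is about to exec is actually runnable. Read as a helper, roughly:

    valid_exec_arg() {
        local arg=$1
        case "$(type -t "$arg")" in
            builtin | function | alias | keyword) ;;         # shell-resolvable: fine as-is
            file) arg=$(type -P "$arg") && [[ -x $arg ]] ;;  # must be an executable on disk
            *) return 1 ;;                                   # nothing runnable by that name
        esac
    }

Here the second spdk_tgt instance is expected to die because pid 1524200 already owns /var/tmp/spdk.sock, which is exactly what the rpc.c errors below confirm.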
00:04:50.283 [2024-07-24 18:56:55.910858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524329 ] 00:04:50.283 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.542 [2024-07-24 18:56:55.992105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.542 [2024-07-24 18:56:56.135189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.542 [2024-07-24 18:56:56.135338] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:50.542 [2024-07-24 18:56:56.135364] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:50.542 [2024-07-24 18:56:56.135381] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1524200 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1524200 ']' 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1524200 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1524200 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1524200' 00:04:50.800 killing process with pid 1524200 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1524200 00:04:50.800 18:56:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1524200 00:04:51.367 00:04:51.367 real 0m2.003s 00:04:51.367 user 0m2.276s 00:04:51.367 sys 0m0.703s 00:04:51.367 18:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.367 18:56:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.367 ************************************ 00:04:51.367 END TEST exit_on_failed_rpc_init 00:04:51.367 ************************************ 00:04:51.367 18:56:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:51.367 00:04:51.367 real 0m16.007s 00:04:51.367 user 0m15.114s 00:04:51.367 sys 0m2.650s 00:04:51.367 18:56:57 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.367 18:56:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.367 ************************************ 00:04:51.367 END TEST skip_rpc 00:04:51.367 ************************************ 00:04:51.626 18:56:57 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:51.626 18:56:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.626 18:56:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.626 18:56:57 -- common/autotest_common.sh@10 -- # set +x 00:04:51.626 ************************************ 00:04:51.626 START TEST rpc_client 00:04:51.626 ************************************ 00:04:51.626 18:56:57 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:51.626 * Looking for test storage... 00:04:51.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:51.626 18:56:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:51.626 OK 00:04:51.626 18:56:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:51.626 00:04:51.626 real 0m0.082s 00:04:51.626 user 0m0.040s 00:04:51.626 sys 0m0.047s 00:04:51.626 18:56:57 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.626 18:56:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:51.626 ************************************ 00:04:51.626 END TEST rpc_client 00:04:51.626 ************************************ 00:04:51.626 18:56:57 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:51.626 18:56:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.626 18:56:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.626 18:56:57 -- common/autotest_common.sh@10 -- # set +x 00:04:51.626 ************************************ 00:04:51.626 START TEST json_config 00:04:51.626 ************************************ 00:04:51.626 18:56:57 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:51.626 18:56:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.626 18:56:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
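json_config begins by sourcing test/nvmf/common.sh, and the lines around this point show it pinning a per-host NVMe identity. The pattern, sketched (the parameter expansion for NVME_HOSTID is an assumption; the values match the trace that follows):

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<host uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}      # the bare UUID, e.g. cd6acfbe-4794-e311-a299-001e67a97b02
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")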
00:04:51.886 18:56:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.886 18:56:57 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:51.886 18:56:57 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.886 18:56:57 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.886 18:56:57 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.887 18:56:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.887 18:56:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.887 18:56:57 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.887 18:56:57 json_config -- paths/export.sh@5 -- # export PATH 00:04:51.887 18:56:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.887 18:56:57 json_config -- nvmf/common.sh@47 -- # : 0 00:04:51.887 18:56:57 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:51.887 18:56:57 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:51.887 18:56:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.887 18:56:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.887 18:56:57 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.887 18:56:57 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:51.887 18:56:57 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:51.887 18:56:57 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:51.887 INFO: JSON configuration test init 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:51.887 18:56:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:51.887 18:56:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:51.887 18:56:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:51.887 18:56:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.887 18:56:57 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:51.887 18:56:57 json_config -- json_config/common.sh@9 -- # local app=target 00:04:51.887 18:56:57 json_config -- json_config/common.sh@10 -- # shift 00:04:51.887 18:56:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.887 18:56:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.887 18:56:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.887 18:56:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:04:51.887 18:56:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.887 18:56:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1524584 00:04:51.887 18:56:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.887 Waiting for target to run... 00:04:51.887 18:56:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:51.887 18:56:57 json_config -- json_config/common.sh@25 -- # waitforlisten 1524584 /var/tmp/spdk_tgt.sock 00:04:51.887 18:56:57 json_config -- common/autotest_common.sh@831 -- # '[' -z 1524584 ']' 00:04:51.887 18:56:57 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.887 18:56:57 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.887 18:56:57 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.887 18:56:57 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.887 18:56:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.887 [2024-07-24 18:56:57.428468] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:04:51.887 [2024-07-24 18:56:57.428593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524584 ] 00:04:51.887 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.454 [2024-07-24 18:56:58.140917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.713 [2024-07-24 18:56:58.323271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.651 18:56:59 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:53.651 18:56:59 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:53.651 18:56:59 json_config -- json_config/common.sh@26 -- # echo '' 00:04:53.651 00:04:53.651 18:56:59 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:53.651 18:56:59 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:53.651 18:56:59 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:53.651 18:56:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.651 18:56:59 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:53.651 18:56:59 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:53.651 18:56:59 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:53.651 18:56:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:53.651 18:56:59 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:53.651 18:56:59 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:53.651 18:56:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:56.938 18:57:02 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:04:56.938 18:57:02 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:56.938 18:57:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.938 18:57:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.938 18:57:02 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:56.938 18:57:02 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:56.938 18:57:02 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:56.938 18:57:02 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:56.938 18:57:02 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:56.938 18:57:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@51 -- # sort 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:57.197 18:57:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:57.197 18:57:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:57.197 18:57:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.197 18:57:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:57.197 18:57:02 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:57.197 18:57:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:57.766 MallocForNvmf0 00:04:57.766 
18:57:03 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:57.766 18:57:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:58.025 MallocForNvmf1 00:04:58.025 18:57:03 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:58.025 18:57:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:58.592 [2024-07-24 18:57:04.033170] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.592 18:57:04 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:58.592 18:57:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:59.158 18:57:04 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:59.158 18:57:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:59.158 18:57:04 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:59.158 18:57:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:59.726 18:57:05 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:59.726 18:57:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:00.298 [2024-07-24 18:57:05.948273] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:00.298 18:57:05 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:00.298 18:57:05 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.298 18:57:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.555 18:57:06 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:00.555 18:57:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.555 18:57:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.555 18:57:06 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:00.555 18:57:06 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:00.555 18:57:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:01.119 MallocBdevForConfigChangeCheck 00:05:01.120 18:57:06 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:01.120 18:57:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:01.120 18:57:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.120 18:57:06 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:01.120 18:57:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:01.685 18:57:07 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:01.685 INFO: shutting down applications... 00:05:01.685 18:57:07 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:01.685 18:57:07 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:01.685 18:57:07 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:01.685 18:57:07 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:03.590 Calling clear_iscsi_subsystem 00:05:03.590 Calling clear_nvmf_subsystem 00:05:03.590 Calling clear_nbd_subsystem 00:05:03.590 Calling clear_ublk_subsystem 00:05:03.590 Calling clear_vhost_blk_subsystem 00:05:03.590 Calling clear_vhost_scsi_subsystem 00:05:03.590 Calling clear_bdev_subsystem 00:05:03.590 18:57:08 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:03.590 18:57:08 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:03.590 18:57:08 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:03.590 18:57:08 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.590 18:57:08 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:03.590 18:57:08 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:03.849 18:57:09 json_config -- json_config/json_config.sh@349 -- # break 00:05:03.849 18:57:09 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:03.849 18:57:09 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:03.849 18:57:09 json_config -- json_config/common.sh@31 -- # local app=target 00:05:03.849 18:57:09 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:03.849 18:57:09 json_config -- json_config/common.sh@35 -- # [[ -n 1524584 ]] 00:05:03.849 18:57:09 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1524584 00:05:03.849 18:57:09 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:03.849 18:57:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.849 18:57:09 json_config -- json_config/common.sh@41 -- # kill -0 1524584 00:05:03.849 18:57:09 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.471 18:57:09 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.472 18:57:09 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.472 18:57:09 json_config -- json_config/common.sh@41 -- # kill -0 1524584 00:05:04.472 18:57:09 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:05:04.472 18:57:09 json_config -- json_config/common.sh@43 -- # break 00:05:04.472 18:57:09 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:04.472 18:57:09 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:04.472 SPDK target shutdown done 00:05:04.472 18:57:09 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:04.472 INFO: relaunching applications... 00:05:04.472 18:57:09 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.472 18:57:09 json_config -- json_config/common.sh@9 -- # local app=target 00:05:04.472 18:57:09 json_config -- json_config/common.sh@10 -- # shift 00:05:04.472 18:57:09 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:04.472 18:57:09 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:04.472 18:57:09 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:04.472 18:57:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.472 18:57:09 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:04.472 18:57:09 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1526662 00:05:04.472 18:57:09 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.472 18:57:09 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:04.472 Waiting for target to run... 00:05:04.472 18:57:09 json_config -- json_config/common.sh@25 -- # waitforlisten 1526662 /var/tmp/spdk_tgt.sock 00:05:04.472 18:57:09 json_config -- common/autotest_common.sh@831 -- # '[' -z 1526662 ']' 00:05:04.472 18:57:09 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:04.472 18:57:09 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.472 18:57:09 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:04.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:04.472 18:57:09 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:04.472 18:57:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.472 [2024-07-24 18:57:09.894665] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
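For orientation: the spdk_tgt_config.json being replayed here was assembled a few lines up through this RPC sequence (every call below appears verbatim in the trace; the $rpc shorthand is only for brevity):

    rpc='./scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0      # backing bdevs for the namespaces
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0           # the "TCP Transport Init" notice
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420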
00:05:04.472 [2024-07-24 18:57:09.894794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526662 ] 00:05:04.472 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.039 [2024-07-24 18:57:10.554400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.039 [2024-07-24 18:57:10.727661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.320 [2024-07-24 18:57:13.839218] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.320 [2024-07-24 18:57:13.871928] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:09.254 18:57:14 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.254 18:57:14 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:09.254 18:57:14 json_config -- json_config/common.sh@26 -- # echo '' 00:05:09.254 00:05:09.254 18:57:14 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:09.254 18:57:14 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:09.254 INFO: Checking if target configuration is the same... 00:05:09.254 18:57:14 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.254 18:57:14 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:09.254 18:57:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.254 + '[' 2 -ne 2 ']' 00:05:09.254 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:09.254 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:09.254 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:09.254 +++ basename /dev/fd/62 00:05:09.254 ++ mktemp /tmp/62.XXX 00:05:09.254 + tmp_file_1=/tmp/62.cR9 00:05:09.254 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.254 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:09.254 + tmp_file_2=/tmp/spdk_tgt_config.json.8vh 00:05:09.254 + ret=0 00:05:09.254 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.821 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.821 + diff -u /tmp/62.cR9 /tmp/spdk_tgt_config.json.8vh 00:05:09.821 + echo 'INFO: JSON config files are the same' 00:05:09.821 INFO: JSON config files are the same 00:05:09.821 + rm /tmp/62.cR9 /tmp/spdk_tgt_config.json.8vh 00:05:09.821 + exit 0 00:05:09.821 18:57:15 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:09.821 18:57:15 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:09.821 INFO: changing configuration and checking if this can be detected... 
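Those '+' lines are json_diff.sh at work: both configurations are normalized through config_filter.py -method sort and compared, and an empty diff proves the relaunched target rebuilt exactly the state it was saved with. Condensed (assuming config_filter.py filters stdin, as the bare invocations suggest; $rpc as sketched above):

    $rpc save_config | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json \
        && echo 'INFO: JSON config files are the same'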
00:05:09.821 18:57:15 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:09.821 18:57:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:10.388 18:57:15 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.388 18:57:15 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:10.388 18:57:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:10.388 + '[' 2 -ne 2 ']' 00:05:10.388 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:10.388 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:10.388 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:10.388 +++ basename /dev/fd/62 00:05:10.388 ++ mktemp /tmp/62.XXX 00:05:10.388 + tmp_file_1=/tmp/62.f4t 00:05:10.388 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:10.388 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:10.388 + tmp_file_2=/tmp/spdk_tgt_config.json.Yg3 00:05:10.388 + ret=0 00:05:10.388 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.645 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.903 + diff -u /tmp/62.f4t /tmp/spdk_tgt_config.json.Yg3 00:05:10.903 + ret=1 00:05:10.903 + echo '=== Start of file: /tmp/62.f4t ===' 00:05:10.903 + cat /tmp/62.f4t 00:05:10.903 + echo '=== End of file: /tmp/62.f4t ===' 00:05:10.903 + echo '' 00:05:10.903 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Yg3 ===' 00:05:10.903 + cat /tmp/spdk_tgt_config.json.Yg3 00:05:10.903 + echo '=== End of file: /tmp/spdk_tgt_config.json.Yg3 ===' 00:05:10.903 + echo '' 00:05:10.903 + rm /tmp/62.f4t /tmp/spdk_tgt_config.json.Yg3 00:05:10.903 + exit 1 00:05:10.903 18:57:16 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:10.903 INFO: configuration change detected. 
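And the negative half: delete one bdev, rerun the same normalize-and-diff, and the comparison must now fail, which is what ret=1 records above. Roughly:

    $rpc bdev_malloc_delete MallocBdevForConfigChangeCheck   # mutate the live config
    $rpc save_config | ./test/json_config/config_filter.py -method sort > /tmp/live.json
    diff -u /tmp/saved.json /tmp/live.json \
        || echo 'INFO: configuration change detected.'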
00:05:10.903 18:57:16 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:10.903 18:57:16 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.903 18:57:16 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:10.903 18:57:16 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:10.903 18:57:16 json_config -- json_config/json_config.sh@321 -- # [[ -n 1526662 ]] 00:05:10.903 18:57:16 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:10.903 18:57:16 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.903 18:57:16 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:10.903 18:57:16 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:10.903 18:57:16 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:10.903 18:57:16 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:10.903 18:57:16 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:10.903 18:57:16 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.903 18:57:16 json_config -- json_config/json_config.sh@327 -- # killprocess 1526662 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@950 -- # '[' -z 1526662 ']' 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@954 -- # kill -0 1526662 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@955 -- # uname 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1526662 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1526662' 00:05:10.903 killing process with pid 1526662 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@969 -- # kill 1526662 00:05:10.903 18:57:16 json_config -- common/autotest_common.sh@974 -- # wait 1526662 00:05:12.808 18:57:18 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:12.808 18:57:18 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:12.808 18:57:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:12.808 18:57:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.808 18:57:18 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:12.808 18:57:18 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:12.808 INFO: Success 00:05:12.808 00:05:12.808 real 0m21.068s 
00:05:12.808 user 0m26.005s 00:05:12.808 sys 0m3.321s 00:05:12.808 18:57:18 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.808 18:57:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.808 ************************************ 00:05:12.808 END TEST json_config 00:05:12.808 ************************************ 00:05:12.808 18:57:18 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:12.808 18:57:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.808 18:57:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.808 18:57:18 -- common/autotest_common.sh@10 -- # set +x 00:05:12.808 ************************************ 00:05:12.808 START TEST json_config_extra_key 00:05:12.808 ************************************ 00:05:12.808 18:57:18 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:12.808 18:57:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.808 18:57:18 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:12.808 18:57:18 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.808 18:57:18 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.809 18:57:18 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.809 18:57:18 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.809 18:57:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.809 18:57:18 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.809 18:57:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:12.809 18:57:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.809 18:57:18 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:12.809 18:57:18 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:12.809 18:57:18 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:12.809 18:57:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.809 18:57:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.809 18:57:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.809 18:57:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:12.809 18:57:18 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:12.809 18:57:18 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:12.809 18:57:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:12.809 18:57:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:12.809 18:57:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:12.809 18:57:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:12.809 18:57:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:12.809 18:57:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:12.809 18:57:18 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:12.809 18:57:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:12.809 18:57:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:12.809 18:57:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:12.809 18:57:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:12.809 INFO: launching applications... 00:05:12.809 18:57:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:12.809 18:57:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:12.809 18:57:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:12.809 18:57:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:12.809 18:57:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:12.809 18:57:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:12.809 18:57:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.809 18:57:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.809 18:57:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1527900 00:05:12.809 18:57:18 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:12.809 18:57:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:12.809 Waiting for target to run... 00:05:12.809 18:57:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1527900 /var/tmp/spdk_tgt.sock 00:05:12.809 18:57:18 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1527900 ']' 00:05:12.809 18:57:18 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.809 18:57:18 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.809 18:57:18 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:12.809 18:57:18 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.809 18:57:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.069 [2024-07-24 18:57:18.608608] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
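json_config_extra_key boots the target straight from a canned JSON file instead of building state over RPC; the command line under test is the one echoed here. Sketch:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./test/json_config/extra_key.json &
    waitforlisten $! /var/tmp/spdk_tgt.sock

Coming up cleanly despite whatever unrecognized keys extra_key.json carries is, going by the test's name, the pass condition; the file's contents aren't shown in this log.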
00:05:13.069 [2024-07-24 18:57:18.608795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1527900 ] 00:05:13.069 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.638 [2024-07-24 18:57:19.262082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.898 [2024-07-24 18:57:19.419699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.156 18:57:19 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.156 18:57:19 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:14.156 18:57:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:14.156 00:05:14.156 18:57:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:14.156 INFO: shutting down applications... 00:05:14.156 18:57:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:14.156 18:57:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:14.156 18:57:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:14.156 18:57:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1527900 ]] 00:05:14.156 18:57:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1527900 00:05:14.156 18:57:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:14.156 18:57:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.156 18:57:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1527900 00:05:14.156 18:57:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.723 18:57:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.723 18:57:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.723 18:57:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1527900 00:05:14.723 18:57:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.290 18:57:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.290 18:57:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.290 18:57:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1527900 00:05:15.291 18:57:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:15.291 18:57:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:15.291 18:57:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:15.291 18:57:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:15.291 SPDK target shutdown done 00:05:15.291 18:57:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:15.291 Success 00:05:15.291 00:05:15.291 real 0m2.438s 00:05:15.291 user 0m2.101s 00:05:15.291 sys 0m0.784s 00:05:15.291 18:57:20 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.291 18:57:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:15.291 ************************************ 00:05:15.291 END TEST json_config_extra_key 00:05:15.291 ************************************ 00:05:15.291 18:57:20 -- spdk/autotest.sh@174 -- # 
run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:15.291 18:57:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.291 18:57:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.291 18:57:20 -- common/autotest_common.sh@10 -- # set +x 00:05:15.291 ************************************ 00:05:15.291 START TEST alias_rpc 00:05:15.291 ************************************ 00:05:15.291 18:57:20 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:15.291 * Looking for test storage... 00:05:15.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:15.291 18:57:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:15.291 18:57:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1528272 00:05:15.291 18:57:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.291 18:57:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1528272 00:05:15.551 18:57:20 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1528272 ']' 00:05:15.551 18:57:20 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.551 18:57:20 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:15.551 18:57:20 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.551 18:57:20 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:15.551 18:57:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.551 [2024-07-24 18:57:21.042561] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
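alias_rpc is about rpc.py's deprecated method aliases: as the next lines show, the only RPC the suite issues is load_config -i, where -i (--include-aliases) lets the client resolve old method names. xtrace doesn't print heredocs, so the payload below is hypothetical:

    ./scripts/rpc.py load_config -i <<'JSON'
    {
      "subsystems": []
    }
    JSON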
00:05:15.551 [2024-07-24 18:57:21.042664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528272 ] 00:05:15.551 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.551 [2024-07-24 18:57:21.141455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.811 [2024-07-24 18:57:21.347614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.379 18:57:21 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:16.379 18:57:21 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:16.379 18:57:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:16.945 18:57:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1528272 00:05:16.945 18:57:22 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1528272 ']' 00:05:16.945 18:57:22 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1528272 00:05:16.945 18:57:22 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:16.945 18:57:22 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.945 18:57:22 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1528272 00:05:16.945 18:57:22 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.945 18:57:22 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.945 18:57:22 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1528272' 00:05:16.945 killing process with pid 1528272 00:05:16.945 18:57:22 alias_rpc -- common/autotest_common.sh@969 -- # kill 1528272 00:05:16.945 18:57:22 alias_rpc -- common/autotest_common.sh@974 -- # wait 1528272 00:05:17.514 00:05:17.514 real 0m2.156s 00:05:17.514 user 0m2.570s 00:05:17.514 sys 0m0.688s 00:05:17.514 18:57:23 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.514 18:57:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.514 ************************************ 00:05:17.514 END TEST alias_rpc 00:05:17.514 ************************************ 00:05:17.514 18:57:23 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:17.514 18:57:23 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:17.514 18:57:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.514 18:57:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.514 18:57:23 -- common/autotest_common.sh@10 -- # set +x 00:05:17.514 ************************************ 00:05:17.514 START TEST spdkcli_tcp 00:05:17.514 ************************************ 00:05:17.514 18:57:23 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:17.514 * Looking for test storage... 
00:05:17.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:17.514 18:57:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:17.514 18:57:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:17.514 18:57:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:17.514 18:57:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:17.514 18:57:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:17.514 18:57:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:17.514 18:57:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:17.514 18:57:23 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:17.514 18:57:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.514 18:57:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1528557 00:05:17.514 18:57:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:17.514 18:57:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1528557 00:05:17.514 18:57:23 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1528557 ']' 00:05:17.514 18:57:23 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.514 18:57:23 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.514 18:57:23 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.514 18:57:23 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.514 18:57:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.779 [2024-07-24 18:57:23.254988] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:05:17.779 [2024-07-24 18:57:23.255096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528557 ] 00:05:17.779 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.779 [2024-07-24 18:57:23.353382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.037 [2024-07-24 18:57:23.554619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.037 [2024-07-24 18:57:23.554629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.295 18:57:23 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.295 18:57:23 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:18.295 18:57:23 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1528601 00:05:18.295 18:57:23 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:18.295 18:57:23 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:18.556 [ 00:05:18.556 "bdev_malloc_delete", 00:05:18.556 "bdev_malloc_create", 00:05:18.556 "bdev_null_resize", 00:05:18.556 "bdev_null_delete", 00:05:18.556 "bdev_null_create", 00:05:18.556 "bdev_nvme_cuse_unregister", 00:05:18.556 "bdev_nvme_cuse_register", 00:05:18.556 "bdev_opal_new_user", 00:05:18.556 "bdev_opal_set_lock_state", 00:05:18.556 "bdev_opal_delete", 00:05:18.556 "bdev_opal_get_info", 00:05:18.556 "bdev_opal_create", 00:05:18.556 "bdev_nvme_opal_revert", 00:05:18.556 "bdev_nvme_opal_init", 00:05:18.556 "bdev_nvme_send_cmd", 00:05:18.556 "bdev_nvme_get_path_iostat", 00:05:18.556 "bdev_nvme_get_mdns_discovery_info", 00:05:18.556 "bdev_nvme_stop_mdns_discovery", 00:05:18.556 "bdev_nvme_start_mdns_discovery", 00:05:18.556 "bdev_nvme_set_multipath_policy", 00:05:18.556 "bdev_nvme_set_preferred_path", 00:05:18.556 "bdev_nvme_get_io_paths", 00:05:18.556 "bdev_nvme_remove_error_injection", 00:05:18.556 "bdev_nvme_add_error_injection", 00:05:18.556 "bdev_nvme_get_discovery_info", 00:05:18.556 "bdev_nvme_stop_discovery", 00:05:18.556 "bdev_nvme_start_discovery", 00:05:18.556 "bdev_nvme_get_controller_health_info", 00:05:18.556 "bdev_nvme_disable_controller", 00:05:18.556 "bdev_nvme_enable_controller", 00:05:18.556 "bdev_nvme_reset_controller", 00:05:18.556 "bdev_nvme_get_transport_statistics", 00:05:18.556 "bdev_nvme_apply_firmware", 00:05:18.556 "bdev_nvme_detach_controller", 00:05:18.556 "bdev_nvme_get_controllers", 00:05:18.556 "bdev_nvme_attach_controller", 00:05:18.556 "bdev_nvme_set_hotplug", 00:05:18.556 "bdev_nvme_set_options", 00:05:18.556 "bdev_passthru_delete", 00:05:18.556 "bdev_passthru_create", 00:05:18.556 "bdev_lvol_set_parent_bdev", 00:05:18.556 "bdev_lvol_set_parent", 00:05:18.556 "bdev_lvol_check_shallow_copy", 00:05:18.556 "bdev_lvol_start_shallow_copy", 00:05:18.556 "bdev_lvol_grow_lvstore", 00:05:18.556 "bdev_lvol_get_lvols", 00:05:18.556 "bdev_lvol_get_lvstores", 00:05:18.556 "bdev_lvol_delete", 00:05:18.556 "bdev_lvol_set_read_only", 00:05:18.556 "bdev_lvol_resize", 00:05:18.556 "bdev_lvol_decouple_parent", 00:05:18.556 "bdev_lvol_inflate", 00:05:18.556 "bdev_lvol_rename", 00:05:18.556 "bdev_lvol_clone_bdev", 00:05:18.556 "bdev_lvol_clone", 00:05:18.556 "bdev_lvol_snapshot", 00:05:18.556 "bdev_lvol_create", 00:05:18.556 "bdev_lvol_delete_lvstore", 00:05:18.556 
"bdev_lvol_rename_lvstore", 00:05:18.556 "bdev_lvol_create_lvstore", 00:05:18.556 "bdev_raid_set_options", 00:05:18.556 "bdev_raid_remove_base_bdev", 00:05:18.556 "bdev_raid_add_base_bdev", 00:05:18.556 "bdev_raid_delete", 00:05:18.556 "bdev_raid_create", 00:05:18.556 "bdev_raid_get_bdevs", 00:05:18.556 "bdev_error_inject_error", 00:05:18.556 "bdev_error_delete", 00:05:18.556 "bdev_error_create", 00:05:18.556 "bdev_split_delete", 00:05:18.556 "bdev_split_create", 00:05:18.556 "bdev_delay_delete", 00:05:18.556 "bdev_delay_create", 00:05:18.556 "bdev_delay_update_latency", 00:05:18.556 "bdev_zone_block_delete", 00:05:18.556 "bdev_zone_block_create", 00:05:18.556 "blobfs_create", 00:05:18.556 "blobfs_detect", 00:05:18.556 "blobfs_set_cache_size", 00:05:18.556 "bdev_aio_delete", 00:05:18.556 "bdev_aio_rescan", 00:05:18.556 "bdev_aio_create", 00:05:18.556 "bdev_ftl_set_property", 00:05:18.556 "bdev_ftl_get_properties", 00:05:18.556 "bdev_ftl_get_stats", 00:05:18.556 "bdev_ftl_unmap", 00:05:18.556 "bdev_ftl_unload", 00:05:18.556 "bdev_ftl_delete", 00:05:18.556 "bdev_ftl_load", 00:05:18.556 "bdev_ftl_create", 00:05:18.556 "bdev_virtio_attach_controller", 00:05:18.556 "bdev_virtio_scsi_get_devices", 00:05:18.556 "bdev_virtio_detach_controller", 00:05:18.556 "bdev_virtio_blk_set_hotplug", 00:05:18.556 "bdev_iscsi_delete", 00:05:18.556 "bdev_iscsi_create", 00:05:18.556 "bdev_iscsi_set_options", 00:05:18.556 "accel_error_inject_error", 00:05:18.556 "ioat_scan_accel_module", 00:05:18.556 "dsa_scan_accel_module", 00:05:18.556 "iaa_scan_accel_module", 00:05:18.556 "vfu_virtio_create_scsi_endpoint", 00:05:18.556 "vfu_virtio_scsi_remove_target", 00:05:18.556 "vfu_virtio_scsi_add_target", 00:05:18.556 "vfu_virtio_create_blk_endpoint", 00:05:18.556 "vfu_virtio_delete_endpoint", 00:05:18.556 "keyring_file_remove_key", 00:05:18.556 "keyring_file_add_key", 00:05:18.556 "keyring_linux_set_options", 00:05:18.556 "iscsi_get_histogram", 00:05:18.556 "iscsi_enable_histogram", 00:05:18.556 "iscsi_set_options", 00:05:18.556 "iscsi_get_auth_groups", 00:05:18.556 "iscsi_auth_group_remove_secret", 00:05:18.556 "iscsi_auth_group_add_secret", 00:05:18.556 "iscsi_delete_auth_group", 00:05:18.556 "iscsi_create_auth_group", 00:05:18.556 "iscsi_set_discovery_auth", 00:05:18.557 "iscsi_get_options", 00:05:18.557 "iscsi_target_node_request_logout", 00:05:18.557 "iscsi_target_node_set_redirect", 00:05:18.557 "iscsi_target_node_set_auth", 00:05:18.557 "iscsi_target_node_add_lun", 00:05:18.557 "iscsi_get_stats", 00:05:18.557 "iscsi_get_connections", 00:05:18.557 "iscsi_portal_group_set_auth", 00:05:18.557 "iscsi_start_portal_group", 00:05:18.557 "iscsi_delete_portal_group", 00:05:18.557 "iscsi_create_portal_group", 00:05:18.557 "iscsi_get_portal_groups", 00:05:18.557 "iscsi_delete_target_node", 00:05:18.557 "iscsi_target_node_remove_pg_ig_maps", 00:05:18.557 "iscsi_target_node_add_pg_ig_maps", 00:05:18.557 "iscsi_create_target_node", 00:05:18.557 "iscsi_get_target_nodes", 00:05:18.557 "iscsi_delete_initiator_group", 00:05:18.557 "iscsi_initiator_group_remove_initiators", 00:05:18.557 "iscsi_initiator_group_add_initiators", 00:05:18.557 "iscsi_create_initiator_group", 00:05:18.557 "iscsi_get_initiator_groups", 00:05:18.557 "nvmf_set_crdt", 00:05:18.557 "nvmf_set_config", 00:05:18.557 "nvmf_set_max_subsystems", 00:05:18.557 "nvmf_stop_mdns_prr", 00:05:18.557 "nvmf_publish_mdns_prr", 00:05:18.557 "nvmf_subsystem_get_listeners", 00:05:18.557 "nvmf_subsystem_get_qpairs", 00:05:18.557 "nvmf_subsystem_get_controllers", 00:05:18.557 
"nvmf_get_stats", 00:05:18.557 "nvmf_get_transports", 00:05:18.557 "nvmf_create_transport", 00:05:18.557 "nvmf_get_targets", 00:05:18.557 "nvmf_delete_target", 00:05:18.557 "nvmf_create_target", 00:05:18.557 "nvmf_subsystem_allow_any_host", 00:05:18.557 "nvmf_subsystem_remove_host", 00:05:18.557 "nvmf_subsystem_add_host", 00:05:18.557 "nvmf_ns_remove_host", 00:05:18.557 "nvmf_ns_add_host", 00:05:18.557 "nvmf_subsystem_remove_ns", 00:05:18.557 "nvmf_subsystem_add_ns", 00:05:18.557 "nvmf_subsystem_listener_set_ana_state", 00:05:18.557 "nvmf_discovery_get_referrals", 00:05:18.557 "nvmf_discovery_remove_referral", 00:05:18.557 "nvmf_discovery_add_referral", 00:05:18.557 "nvmf_subsystem_remove_listener", 00:05:18.557 "nvmf_subsystem_add_listener", 00:05:18.557 "nvmf_delete_subsystem", 00:05:18.557 "nvmf_create_subsystem", 00:05:18.557 "nvmf_get_subsystems", 00:05:18.557 "env_dpdk_get_mem_stats", 00:05:18.557 "nbd_get_disks", 00:05:18.557 "nbd_stop_disk", 00:05:18.557 "nbd_start_disk", 00:05:18.557 "ublk_recover_disk", 00:05:18.557 "ublk_get_disks", 00:05:18.557 "ublk_stop_disk", 00:05:18.557 "ublk_start_disk", 00:05:18.557 "ublk_destroy_target", 00:05:18.557 "ublk_create_target", 00:05:18.557 "virtio_blk_create_transport", 00:05:18.557 "virtio_blk_get_transports", 00:05:18.557 "vhost_controller_set_coalescing", 00:05:18.557 "vhost_get_controllers", 00:05:18.557 "vhost_delete_controller", 00:05:18.557 "vhost_create_blk_controller", 00:05:18.557 "vhost_scsi_controller_remove_target", 00:05:18.557 "vhost_scsi_controller_add_target", 00:05:18.557 "vhost_start_scsi_controller", 00:05:18.557 "vhost_create_scsi_controller", 00:05:18.557 "thread_set_cpumask", 00:05:18.557 "framework_get_governor", 00:05:18.557 "framework_get_scheduler", 00:05:18.557 "framework_set_scheduler", 00:05:18.557 "framework_get_reactors", 00:05:18.557 "thread_get_io_channels", 00:05:18.557 "thread_get_pollers", 00:05:18.557 "thread_get_stats", 00:05:18.557 "framework_monitor_context_switch", 00:05:18.557 "spdk_kill_instance", 00:05:18.557 "log_enable_timestamps", 00:05:18.557 "log_get_flags", 00:05:18.557 "log_clear_flag", 00:05:18.557 "log_set_flag", 00:05:18.557 "log_get_level", 00:05:18.557 "log_set_level", 00:05:18.557 "log_get_print_level", 00:05:18.557 "log_set_print_level", 00:05:18.557 "framework_enable_cpumask_locks", 00:05:18.557 "framework_disable_cpumask_locks", 00:05:18.557 "framework_wait_init", 00:05:18.557 "framework_start_init", 00:05:18.557 "scsi_get_devices", 00:05:18.557 "bdev_get_histogram", 00:05:18.557 "bdev_enable_histogram", 00:05:18.557 "bdev_set_qos_limit", 00:05:18.557 "bdev_set_qd_sampling_period", 00:05:18.557 "bdev_get_bdevs", 00:05:18.557 "bdev_reset_iostat", 00:05:18.557 "bdev_get_iostat", 00:05:18.557 "bdev_examine", 00:05:18.557 "bdev_wait_for_examine", 00:05:18.557 "bdev_set_options", 00:05:18.557 "notify_get_notifications", 00:05:18.557 "notify_get_types", 00:05:18.557 "accel_get_stats", 00:05:18.557 "accel_set_options", 00:05:18.557 "accel_set_driver", 00:05:18.557 "accel_crypto_key_destroy", 00:05:18.557 "accel_crypto_keys_get", 00:05:18.557 "accel_crypto_key_create", 00:05:18.557 "accel_assign_opc", 00:05:18.557 "accel_get_module_info", 00:05:18.557 "accel_get_opc_assignments", 00:05:18.557 "vmd_rescan", 00:05:18.557 "vmd_remove_device", 00:05:18.557 "vmd_enable", 00:05:18.557 "sock_get_default_impl", 00:05:18.557 "sock_set_default_impl", 00:05:18.557 "sock_impl_set_options", 00:05:18.557 "sock_impl_get_options", 00:05:18.557 "iobuf_get_stats", 00:05:18.557 "iobuf_set_options", 
00:05:18.557 "keyring_get_keys", 00:05:18.557 "framework_get_pci_devices", 00:05:18.557 "framework_get_config", 00:05:18.557 "framework_get_subsystems", 00:05:18.557 "vfu_tgt_set_base_path", 00:05:18.557 "trace_get_info", 00:05:18.557 "trace_get_tpoint_group_mask", 00:05:18.557 "trace_disable_tpoint_group", 00:05:18.557 "trace_enable_tpoint_group", 00:05:18.557 "trace_clear_tpoint_mask", 00:05:18.557 "trace_set_tpoint_mask", 00:05:18.557 "spdk_get_version", 00:05:18.557 "rpc_get_methods" 00:05:18.557 ] 00:05:18.558 18:57:24 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:18.558 18:57:24 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.558 18:57:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.558 18:57:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:18.558 18:57:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1528557 00:05:18.558 18:57:24 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1528557 ']' 00:05:18.558 18:57:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1528557 00:05:18.558 18:57:24 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:18.558 18:57:24 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:18.558 18:57:24 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1528557 00:05:18.558 18:57:24 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.558 18:57:24 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.558 18:57:24 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1528557' 00:05:18.558 killing process with pid 1528557 00:05:18.558 18:57:24 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1528557 00:05:18.558 18:57:24 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1528557 00:05:19.503 00:05:19.503 real 0m1.711s 00:05:19.503 user 0m2.990s 00:05:19.503 sys 0m0.608s 00:05:19.503 18:57:24 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.503 18:57:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.503 ************************************ 00:05:19.503 END TEST spdkcli_tcp 00:05:19.503 ************************************ 00:05:19.503 18:57:24 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.503 18:57:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.503 18:57:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.503 18:57:24 -- common/autotest_common.sh@10 -- # set +x 00:05:19.503 ************************************ 00:05:19.503 START TEST dpdk_mem_utility 00:05:19.503 ************************************ 00:05:19.503 18:57:24 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.503 * Looking for test storage... 
00:05:19.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:19.503 18:57:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:19.503 18:57:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1528798 00:05:19.503 18:57:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:19.503 18:57:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1528798 00:05:19.503 18:57:24 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1528798 ']' 00:05:19.503 18:57:24 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.503 18:57:24 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.503 18:57:24 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.503 18:57:24 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.503 18:57:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:19.503 [2024-07-24 18:57:25.099782] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:05:19.503 [2024-07-24 18:57:25.099950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528798 ] 00:05:19.503 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.762 [2024-07-24 18:57:25.230867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.762 [2024-07-24 18:57:25.431577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.331 18:57:25 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.331 18:57:25 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:20.331 18:57:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:20.331 18:57:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:20.331 18:57:25 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.331 18:57:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.331 { 00:05:20.331 "filename": "/tmp/spdk_mem_dump.txt" 00:05:20.331 } 00:05:20.331 18:57:25 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.331 18:57:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:20.331 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:20.331 1 heaps totaling size 814.000000 MiB 00:05:20.331 size: 814.000000 MiB heap id: 0 00:05:20.331 end heaps---------- 00:05:20.331 8 mempools totaling size 598.116089 MiB 00:05:20.331 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:20.331 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:20.331 size: 84.521057 MiB name: bdev_io_1528798 00:05:20.331 size: 51.011292 MiB name: evtpool_1528798 00:05:20.331 
size: 50.003479 MiB name: msgpool_1528798 00:05:20.331 size: 21.763794 MiB name: PDU_Pool 00:05:20.331 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:20.331 size: 0.026123 MiB name: Session_Pool 00:05:20.331 end mempools------- 00:05:20.331 6 memzones totaling size 4.142822 MiB 00:05:20.331 size: 1.000366 MiB name: RG_ring_0_1528798 00:05:20.331 size: 1.000366 MiB name: RG_ring_1_1528798 00:05:20.331 size: 1.000366 MiB name: RG_ring_4_1528798 00:05:20.331 size: 1.000366 MiB name: RG_ring_5_1528798 00:05:20.331 size: 0.125366 MiB name: RG_ring_2_1528798 00:05:20.331 size: 0.015991 MiB name: RG_ring_3_1528798 00:05:20.331 end memzones------- 00:05:20.331 18:57:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:20.590 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:20.590 list of free elements. size: 12.519348 MiB 00:05:20.590 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:20.590 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:20.590 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:20.590 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:20.590 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:20.590 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:20.590 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:20.590 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:20.590 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:20.590 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:20.590 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:20.590 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:20.591 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:20.591 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:20.591 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:20.591 list of standard malloc elements. 
size: 199.218079 MiB 00:05:20.591 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:20.591 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:20.591 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:20.591 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:20.591 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:20.591 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:20.591 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:20.591 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:20.591 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:20.591 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:20.591 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:20.591 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:20.591 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:20.591 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:20.591 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:20.591 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:20.591 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:20.591 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:20.591 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:20.591 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:20.591 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:20.591 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:20.591 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:20.591 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:20.591 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:20.591 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:20.591 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:20.591 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:20.591 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:20.591 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:20.591 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:20.591 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:20.591 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:20.591 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:20.591 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:20.591 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:20.591 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:20.591 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:20.591 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:20.591 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:20.591 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:20.591 list of memzone associated elements. 
size: 602.262573 MiB 00:05:20.591 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:20.591 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:20.591 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:20.591 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:20.591 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:20.591 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1528798_0 00:05:20.591 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:20.591 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1528798_0 00:05:20.591 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:20.591 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1528798_0 00:05:20.591 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:20.591 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:20.591 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:20.591 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:20.591 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:20.591 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1528798 00:05:20.591 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:20.591 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1528798 00:05:20.591 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:20.591 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1528798 00:05:20.591 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:20.591 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:20.591 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:20.591 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:20.591 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:20.591 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:20.591 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:20.591 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:20.591 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:20.591 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1528798 00:05:20.591 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:20.591 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1528798 00:05:20.591 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:20.591 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1528798 00:05:20.591 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:20.591 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1528798 00:05:20.591 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:20.591 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1528798 00:05:20.591 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:20.591 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:20.591 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:20.591 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:20.591 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:20.591 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:20.591 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:20.591 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1528798 00:05:20.591 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:20.591 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:20.591 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:20.591 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:20.591 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:20.591 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1528798 00:05:20.591 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:20.591 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:20.591 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:20.591 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1528798 00:05:20.591 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:20.591 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1528798 00:05:20.591 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:20.591 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:20.591 18:57:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:20.591 18:57:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1528798 00:05:20.591 18:57:26 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1528798 ']' 00:05:20.591 18:57:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1528798 00:05:20.591 18:57:26 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:20.591 18:57:26 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.591 18:57:26 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1528798 00:05:20.591 18:57:26 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.591 18:57:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.591 18:57:26 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1528798' 00:05:20.591 killing process with pid 1528798 00:05:20.591 18:57:26 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1528798 00:05:20.591 18:57:26 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1528798 00:05:21.158 00:05:21.158 real 0m1.821s 00:05:21.158 user 0m1.832s 00:05:21.158 sys 0m0.694s 00:05:21.158 18:57:26 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.158 18:57:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:21.158 ************************************ 00:05:21.158 END TEST dpdk_mem_utility 00:05:21.158 ************************************ 00:05:21.158 18:57:26 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:21.158 18:57:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.158 18:57:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.158 18:57:26 -- common/autotest_common.sh@10 -- # set +x 00:05:21.158 ************************************ 00:05:21.158 START TEST event 00:05:21.158 ************************************ 00:05:21.158 18:57:26 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:21.158 * Looking for test storage... 
00:05:21.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:21.158 18:57:26 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:21.158 18:57:26 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:21.158 18:57:26 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.158 18:57:26 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:21.158 18:57:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.158 18:57:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:21.417 ************************************ 00:05:21.417 START TEST event_perf 00:05:21.417 ************************************ 00:05:21.417 18:57:26 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.417 Running I/O for 1 seconds...[2024-07-24 18:57:26.908175] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:05:21.417 [2024-07-24 18:57:26.908319] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529114 ] 00:05:21.417 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.417 [2024-07-24 18:57:27.034878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.676 [2024-07-24 18:57:27.243649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.676 [2024-07-24 18:57:27.243713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.676 [2024-07-24 18:57:27.243777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.676 [2024-07-24 18:57:27.243782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.051 Running I/O for 1 seconds... 00:05:23.051 lcore 0: 168853 00:05:23.051 lcore 1: 168852 00:05:23.051 lcore 2: 168851 00:05:23.051 lcore 3: 168853 00:05:23.051 done. 00:05:23.051 00:05:23.051 real 0m1.554s 00:05:23.051 user 0m4.396s 00:05:23.051 sys 0m0.146s 00:05:23.051 18:57:28 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.051 18:57:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.051 ************************************ 00:05:23.051 END TEST event_perf 00:05:23.051 ************************************ 00:05:23.051 18:57:28 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:23.051 18:57:28 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:23.051 18:57:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.051 18:57:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.051 ************************************ 00:05:23.051 START TEST event_reactor 00:05:23.051 ************************************ 00:05:23.051 18:57:28 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:23.051 [2024-07-24 18:57:28.533276] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:05:23.051 [2024-07-24 18:57:28.533415] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529276 ] 00:05:23.051 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.051 [2024-07-24 18:57:28.669163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.310 [2024-07-24 18:57:28.877514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.698 test_start 00:05:24.698 oneshot 00:05:24.698 tick 100 00:05:24.698 tick 100 00:05:24.698 tick 250 00:05:24.698 tick 100 00:05:24.698 tick 100 00:05:24.698 tick 100 00:05:24.698 tick 250 00:05:24.698 tick 500 00:05:24.698 tick 100 00:05:24.698 tick 100 00:05:24.698 tick 250 00:05:24.698 tick 100 00:05:24.698 tick 100 00:05:24.698 test_end 00:05:24.698 00:05:24.698 real 0m1.558s 00:05:24.698 user 0m1.394s 00:05:24.698 sys 0m0.153s 00:05:24.698 18:57:30 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.698 18:57:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:24.698 ************************************ 00:05:24.698 END TEST event_reactor 00:05:24.698 ************************************ 00:05:24.698 18:57:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:24.698 18:57:30 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:24.698 18:57:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.698 18:57:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.698 ************************************ 00:05:24.698 START TEST event_reactor_perf 00:05:24.698 ************************************ 00:05:24.698 18:57:30 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:24.698 [2024-07-24 18:57:30.141519] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:05:24.698 [2024-07-24 18:57:30.141587] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529449 ] 00:05:24.698 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.698 [2024-07-24 18:57:30.256380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.957 [2024-07-24 18:57:30.468840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.334 test_start 00:05:26.334 test_end 00:05:26.334 Performance: 175307 events per second 00:05:26.334 00:05:26.334 real 0m1.544s 00:05:26.334 user 0m1.394s 00:05:26.334 sys 0m0.138s 00:05:26.334 18:57:31 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.334 18:57:31 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.334 ************************************ 00:05:26.334 END TEST event_reactor_perf 00:05:26.334 ************************************ 00:05:26.334 18:57:31 event -- event/event.sh@49 -- # uname -s 00:05:26.334 18:57:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:26.334 18:57:31 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:26.334 18:57:31 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.334 18:57:31 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.334 18:57:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.334 ************************************ 00:05:26.334 START TEST event_scheduler 00:05:26.334 ************************************ 00:05:26.334 18:57:31 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:26.334 * Looking for test storage... 00:05:26.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:26.334 18:57:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:26.334 18:57:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1529740 00:05:26.334 18:57:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:26.334 18:57:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.334 18:57:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1529740 00:05:26.334 18:57:31 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1529740 ']' 00:05:26.334 18:57:31 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.334 18:57:31 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.334 18:57:31 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:26.334 18:57:31 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.334 18:57:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.334 [2024-07-24 18:57:31.880558] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:05:26.334 [2024-07-24 18:57:31.880664] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529740 ] 00:05:26.334 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.334 [2024-07-24 18:57:31.990320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.603 [2024-07-24 18:57:32.221316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.603 [2024-07-24 18:57:32.221474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.603 [2024-07-24 18:57:32.221418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.603 [2024-07-24 18:57:32.221480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.862 18:57:32 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.862 18:57:32 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:26.862 18:57:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:26.862 18:57:32 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.862 18:57:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.862 [2024-07-24 18:57:32.367123] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:26.862 [2024-07-24 18:57:32.367166] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:26.862 [2024-07-24 18:57:32.367190] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:26.862 [2024-07-24 18:57:32.367206] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:26.862 [2024-07-24 18:57:32.367221] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:26.862 18:57:32 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.862 18:57:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:26.862 18:57:32 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.862 18:57:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.862 [2024-07-24 18:57:32.548965] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:26.862 18:57:32 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.862 18:57:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:26.862 18:57:32 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.862 18:57:32 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.862 18:57:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.121 ************************************ 00:05:27.121 START TEST scheduler_create_thread 00:05:27.121 ************************************ 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.121 2 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.121 3 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.121 4 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.121 5 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.121 6 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.121 7 00:05:27.121 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.122 8 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.122 9 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.122 10 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.122 18:57:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.690 18:57:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.690 00:05:27.690 real 0m0.597s 00:05:27.690 user 0m0.016s 00:05:27.690 sys 0m0.002s 00:05:27.690 18:57:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.690 18:57:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.690 ************************************ 00:05:27.690 END TEST scheduler_create_thread 00:05:27.690 ************************************ 00:05:27.690 18:57:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:27.690 18:57:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1529740 00:05:27.690 18:57:33 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1529740 ']' 00:05:27.690 18:57:33 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1529740 00:05:27.690 18:57:33 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:27.690 18:57:33 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.690 18:57:33 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1529740 00:05:27.690 18:57:33 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:27.690 18:57:33 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:27.690 18:57:33 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1529740' 00:05:27.690 killing process with pid 1529740 00:05:27.690 18:57:33 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1529740 00:05:27.690 18:57:33 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1529740 00:05:28.257 [2024-07-24 18:57:33.658342] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:28.532 00:05:28.532 real 0m2.314s 00:05:28.532 user 0m3.230s 00:05:28.532 sys 0m0.545s 00:05:28.532 18:57:34 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.532 18:57:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:28.532 ************************************ 00:05:28.532 END TEST event_scheduler 00:05:28.532 ************************************ 00:05:28.532 18:57:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:28.532 18:57:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:28.532 18:57:34 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.532 18:57:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.532 18:57:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.532 ************************************ 00:05:28.532 START TEST app_repeat 00:05:28.532 ************************************ 00:05:28.532 18:57:34 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:28.532 18:57:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.532 18:57:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.532 18:57:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:28.532 18:57:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.532 18:57:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:28.532 18:57:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:28.532 18:57:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:28.532 18:57:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1530054 00:05:28.532 18:57:34 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:28.532 18:57:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.532 18:57:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1530054' 00:05:28.532 Process app_repeat pid: 1530054 00:05:28.532 18:57:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.532 18:57:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:28.532 spdk_app_start Round 0 00:05:28.532 18:57:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1530054 /var/tmp/spdk-nbd.sock 00:05:28.532 18:57:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1530054 ']' 00:05:28.532 18:57:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.532 18:57:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.532 18:57:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.532 18:57:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.532 18:57:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.532 [2024-07-24 18:57:34.151455] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:05:28.532 [2024-07-24 18:57:34.151536] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530054 ] 00:05:28.532 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.791 [2024-07-24 18:57:34.255114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.791 [2024-07-24 18:57:34.467689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.791 [2024-07-24 18:57:34.467694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.725 18:57:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.725 18:57:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:29.725 18:57:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.997 Malloc0 00:05:29.997 18:57:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.255 Malloc1 00:05:30.513 18:57:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.513 18:57:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.513 18:57:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.513 18:57:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.513 18:57:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.513 18:57:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.513 18:57:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.513 18:57:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.513 18:57:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.513 18:57:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.513 18:57:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.513 18:57:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.513 18:57:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.513 18:57:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.513 18:57:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.513 18:57:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.771 /dev/nbd0 00:05:30.771 18:57:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.771 18:57:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.771 18:57:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:30.771 18:57:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:30.771 18:57:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:30.771 18:57:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:30.771 18:57:36 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:30.771 18:57:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:30.771 18:57:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:30.771 18:57:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:30.771 18:57:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.771 1+0 records in 00:05:30.771 1+0 records out 00:05:30.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170896 s, 24.0 MB/s 00:05:30.771 18:57:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.771 18:57:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:30.771 18:57:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:30.771 18:57:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:30.771 18:57:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:30.771 18:57:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.771 18:57:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.771 18:57:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.336 /dev/nbd1 00:05:31.336 18:57:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.336 18:57:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.336 18:57:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:31.336 18:57:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:31.336 18:57:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:31.336 18:57:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:31.336 18:57:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:31.336 18:57:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:31.336 18:57:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:31.336 18:57:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:31.336 18:57:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.336 1+0 records in 00:05:31.336 1+0 records out 00:05:31.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235098 s, 17.4 MB/s 00:05:31.337 18:57:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.337 18:57:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:31.337 18:57:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.337 18:57:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:31.337 18:57:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:31.337 18:57:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.337 18:57:36 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.337 18:57:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.337 18:57:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.337 18:57:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.595 { 00:05:31.595 "nbd_device": "/dev/nbd0", 00:05:31.595 "bdev_name": "Malloc0" 00:05:31.595 }, 00:05:31.595 { 00:05:31.595 "nbd_device": "/dev/nbd1", 00:05:31.595 "bdev_name": "Malloc1" 00:05:31.595 } 00:05:31.595 ]' 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.595 { 00:05:31.595 "nbd_device": "/dev/nbd0", 00:05:31.595 "bdev_name": "Malloc0" 00:05:31.595 }, 00:05:31.595 { 00:05:31.595 "nbd_device": "/dev/nbd1", 00:05:31.595 "bdev_name": "Malloc1" 00:05:31.595 } 00:05:31.595 ]' 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.595 /dev/nbd1' 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.595 /dev/nbd1' 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.595 256+0 records in 00:05:31.595 256+0 records out 00:05:31.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00747803 s, 140 MB/s 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.595 18:57:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.853 256+0 records in 00:05:31.853 256+0 records out 00:05:31.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290318 s, 36.1 MB/s 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.853 256+0 records in 00:05:31.853 256+0 records out 00:05:31.853 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0331129 s, 31.7 MB/s 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.853 18:57:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.111 18:57:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.111 18:57:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.111 18:57:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.111 18:57:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.111 18:57:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.111 18:57:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.111 18:57:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.111 18:57:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.111 18:57:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.111 18:57:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.369 18:57:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.369 18:57:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.369 18:57:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.369 18:57:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.369 18:57:38 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.369 18:57:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.627 18:57:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:32.627 18:57:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.627 18:57:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.627 18:57:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.627 18:57:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.889 18:57:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.889 18:57:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.889 18:57:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.889 18:57:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.889 18:57:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.889 18:57:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.889 18:57:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:32.889 18:57:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.889 18:57:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.889 18:57:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.889 18:57:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.889 18:57:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.889 18:57:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.496 18:57:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:34.064 [2024-07-24 18:57:39.553951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.323 [2024-07-24 18:57:39.763290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.323 [2024-07-24 18:57:39.763290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.323 [2024-07-24 18:57:39.833964] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.323 [2024-07-24 18:57:39.834053] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:36.851 18:57:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:36.851 18:57:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:36.851 spdk_app_start Round 1 00:05:36.851 18:57:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1530054 /var/tmp/spdk-nbd.sock 00:05:36.851 18:57:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1530054 ']' 00:05:36.851 18:57:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.851 18:57:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:36.851 18:57:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
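Before trusting a freshly exported device, waitfornbd (autotest_common.sh) first confirms the kernel registered it, then proves it actually services reads. Condensed from the xtrace above (a simplified sketch; the retry pacing is an assumption):

    waitfornbd() {
        local nbd_name=$1 i
        local testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break  # device in the partition table?
            sleep 0.1                                         # assumed poll interval
        done
        # read one 4 KiB block with O_DIRECT to prove the device answers I/O
        dd if=/dev/$nbd_name of=$testdir/nbdtest bs=4096 count=1 iflag=direct
        local size=$(stat -c %s "$testdir/nbdtest")
        rm -f "$testdir/nbdtest"
        [ "$size" != 0 ]   # a non-empty read means the nbd is live
    }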
00:05:36.851 18:57:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:36.851 18:57:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:36.851 18:57:42 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.851 18:57:42 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:36.851 18:57:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.109 Malloc0 00:05:37.109 18:57:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.675 Malloc1 00:05:37.675 18:57:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.675 18:57:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.675 18:57:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.675 18:57:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.675 18:57:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.675 18:57:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.675 18:57:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.675 18:57:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.675 18:57:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.675 18:57:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.675 18:57:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.675 18:57:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.675 18:57:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.675 18:57:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.675 18:57:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.675 18:57:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:37.933 /dev/nbd0 00:05:37.933 18:57:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:37.933 18:57:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:37.933 18:57:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:37.933 18:57:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:37.933 18:57:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:37.933 18:57:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:37.933 18:57:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:37.933 18:57:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:37.933 18:57:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:37.933 18:57:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:37.933 18:57:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:37.933 1+0 records in 00:05:37.933 1+0 records out 00:05:37.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211793 s, 19.3 MB/s 00:05:37.933 18:57:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.933 18:57:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:37.933 18:57:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:37.933 18:57:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:37.933 18:57:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:37.933 18:57:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:37.933 18:57:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.933 18:57:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.499 /dev/nbd1 00:05:38.499 18:57:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.499 18:57:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.499 18:57:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:38.499 18:57:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:38.499 18:57:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:38.499 18:57:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:38.499 18:57:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:38.499 18:57:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:38.499 18:57:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:38.499 18:57:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:38.499 18:57:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.499 1+0 records in 00:05:38.499 1+0 records out 00:05:38.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230895 s, 17.7 MB/s 00:05:38.499 18:57:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.500 18:57:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:38.500 18:57:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.500 18:57:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:38.500 18:57:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:38.500 18:57:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.500 18:57:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.500 18:57:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.500 18:57:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.500 18:57:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.065 18:57:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:39.065 { 00:05:39.065 "nbd_device": "/dev/nbd0", 00:05:39.065 "bdev_name": "Malloc0" 00:05:39.065 }, 00:05:39.065 { 00:05:39.065 "nbd_device": "/dev/nbd1", 00:05:39.065 "bdev_name": "Malloc1" 00:05:39.065 } 00:05:39.065 ]' 00:05:39.065 18:57:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.065 { 00:05:39.065 "nbd_device": "/dev/nbd0", 00:05:39.065 "bdev_name": "Malloc0" 00:05:39.065 }, 00:05:39.065 { 00:05:39.065 "nbd_device": "/dev/nbd1", 00:05:39.065 "bdev_name": "Malloc1" 00:05:39.065 } 00:05:39.065 ]' 00:05:39.065 18:57:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.065 18:57:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.065 /dev/nbd1' 00:05:39.065 18:57:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.065 /dev/nbd1' 00:05:39.065 18:57:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.065 18:57:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.066 256+0 records in 00:05:39.066 256+0 records out 00:05:39.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00564238 s, 186 MB/s 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.066 256+0 records in 00:05:39.066 256+0 records out 00:05:39.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299477 s, 35.0 MB/s 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.066 256+0 records in 00:05:39.066 256+0 records out 00:05:39.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0318458 s, 32.9 MB/s 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.066 18:57:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.632 18:57:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.632 18:57:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.632 18:57:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.632 18:57:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.632 18:57:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.632 18:57:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.632 18:57:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.632 18:57:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.632 18:57:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.632 18:57:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.199 18:57:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.199 18:57:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.199 18:57:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.199 18:57:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.199 18:57:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.199 18:57:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.456 18:57:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.456 18:57:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.456 18:57:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.456 18:57:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.456 18:57:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.021 18:57:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.021 18:57:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.021 18:57:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.021 18:57:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.021 18:57:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.021 18:57:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.021 18:57:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.021 18:57:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.021 18:57:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.021 18:57:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.021 18:57:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.021 18:57:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.022 18:57:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.279 18:57:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.846 [2024-07-24 18:57:47.282513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.846 [2024-07-24 18:57:47.486475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.846 [2024-07-24 18:57:47.486482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.104 [2024-07-24 18:57:47.557769] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.104 [2024-07-24 18:57:47.557849] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.632 18:57:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.632 18:57:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:44.632 spdk_app_start Round 2 00:05:44.632 18:57:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1530054 /var/tmp/spdk-nbd.sock 00:05:44.632 18:57:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1530054 ']' 00:05:44.632 18:57:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.632 18:57:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.632 18:57:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
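The heart of each round is nbd_dd_data_verify: stage 1 MiB of random data once, push it through every NBD device with O_DIRECT, then compare it back byte-for-byte. Per the trace, approximately:

    testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
    dd if=/dev/urandom of=$testdir/nbdrandtest bs=4096 count=256       # 256 x 4 KiB = 1 MiB
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$testdir/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct  # write phase
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $testdir/nbdrandtest $nbd   # verify phase: any mismatch fails the round
    done
    rm $testdir/nbdrandtest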
00:05:44.632 18:57:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.632 18:57:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.632 18:57:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.632 18:57:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:44.632 18:57:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.891 Malloc0 00:05:44.891 18:57:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.456 Malloc1 00:05:45.456 18:57:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.456 18:57:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.456 18:57:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.456 18:57:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.456 18:57:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.456 18:57:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.456 18:57:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.456 18:57:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.456 18:57:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.456 18:57:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.456 18:57:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.456 18:57:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.456 18:57:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.456 18:57:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.456 18:57:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.456 18:57:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.713 /dev/nbd0 00:05:45.713 18:57:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.713 18:57:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.713 18:57:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:45.713 18:57:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:45.713 18:57:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:45.713 18:57:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:45.713 18:57:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:45.713 18:57:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:45.714 18:57:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:45.714 18:57:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:45.714 18:57:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:45.714 1+0 records in 00:05:45.714 1+0 records out 00:05:45.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213539 s, 19.2 MB/s 00:05:45.714 18:57:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.714 18:57:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:45.714 18:57:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.714 18:57:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:45.714 18:57:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:45.714 18:57:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.714 18:57:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.714 18:57:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.972 /dev/nbd1 00:05:45.972 18:57:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.972 18:57:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.972 18:57:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:45.972 18:57:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:45.972 18:57:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:45.972 18:57:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:45.972 18:57:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:45.972 18:57:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:45.972 18:57:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:45.972 18:57:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:45.972 18:57:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.972 1+0 records in 00:05:45.972 1+0 records out 00:05:45.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023319 s, 17.6 MB/s 00:05:45.972 18:57:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.972 18:57:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:45.972 18:57:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.972 18:57:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:45.972 18:57:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:45.972 18:57:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.972 18:57:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.972 18:57:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.972 18:57:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.972 18:57:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:46.538 { 00:05:46.538 "nbd_device": "/dev/nbd0", 00:05:46.538 "bdev_name": "Malloc0" 00:05:46.538 }, 00:05:46.538 { 00:05:46.538 "nbd_device": "/dev/nbd1", 00:05:46.538 "bdev_name": "Malloc1" 00:05:46.538 } 00:05:46.538 ]' 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.538 { 00:05:46.538 "nbd_device": "/dev/nbd0", 00:05:46.538 "bdev_name": "Malloc0" 00:05:46.538 }, 00:05:46.538 { 00:05:46.538 "nbd_device": "/dev/nbd1", 00:05:46.538 "bdev_name": "Malloc1" 00:05:46.538 } 00:05:46.538 ]' 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.538 /dev/nbd1' 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.538 /dev/nbd1' 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.538 256+0 records in 00:05:46.538 256+0 records out 00:05:46.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00529636 s, 198 MB/s 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.538 256+0 records in 00:05:46.538 256+0 records out 00:05:46.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299758 s, 35.0 MB/s 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.538 18:57:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.796 256+0 records in 00:05:46.796 256+0 records out 00:05:46.796 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0322115 s, 32.6 MB/s 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.796 18:57:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.363 18:57:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.363 18:57:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.363 18:57:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.363 18:57:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.363 18:57:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.363 18:57:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.363 18:57:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.363 18:57:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.363 18:57:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.363 18:57:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.638 18:57:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.638 18:57:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.638 18:57:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.638 18:57:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.638 18:57:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.638 18:57:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.638 18:57:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.638 18:57:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.638 18:57:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.638 18:57:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.638 18:57:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.895 18:57:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.895 18:57:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.895 18:57:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.895 18:57:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.895 18:57:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.895 18:57:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.895 18:57:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.895 18:57:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.895 18:57:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.895 18:57:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.895 18:57:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.895 18:57:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.895 18:57:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.479 18:57:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.745 [2024-07-24 18:57:54.287493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.005 [2024-07-24 18:57:54.485833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.005 [2024-07-24 18:57:54.485839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.005 [2024-07-24 18:57:54.556856] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.005 [2024-07-24 18:57:54.556948] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.532 18:57:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1530054 /var/tmp/spdk-nbd.sock 00:05:51.532 18:57:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1530054 ']' 00:05:51.532 18:57:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.532 18:57:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.532 18:57:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
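Teardown mirrors setup: each device is detached with nbd_stop_disk, waitfornbd_exit polls /proc/partitions until the name disappears, and the round only winds down once nbd_get_disks reports an empty list. Sketched from the trace, reusing the rpc helper shown earlier (poll interval and control flow condensed):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break  # gone from the table -> done
            sleep 0.1
        done
    }
    disks=$(rpc nbd_get_disks | jq -r '.[] | .nbd_device')    # '[]' once both are stopped
    count=$(echo "$disks" | grep -c /dev/nbd || true)         # grep exits non-zero on zero hits
    [ "$count" -eq 0 ] && rpc spdk_kill_instance SIGTERM      # clean -> ask the app to shut down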
00:05:51.532 18:57:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.532 18:57:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.532 18:57:57 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.532 18:57:57 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:51.532 18:57:57 event.app_repeat -- event/event.sh@39 -- # killprocess 1530054 00:05:51.532 18:57:57 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1530054 ']' 00:05:51.532 18:57:57 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1530054 00:05:51.532 18:57:57 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:51.532 18:57:57 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.532 18:57:57 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1530054 00:05:51.790 18:57:57 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.790 18:57:57 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.790 18:57:57 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1530054' 00:05:51.790 killing process with pid 1530054 00:05:51.790 18:57:57 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1530054 00:05:51.790 18:57:57 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1530054 00:05:52.049 spdk_app_start is called in Round 0. 00:05:52.049 Shutdown signal received, stop current app iteration 00:05:52.049 Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 reinitialization... 00:05:52.049 spdk_app_start is called in Round 1. 00:05:52.049 Shutdown signal received, stop current app iteration 00:05:52.049 Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 reinitialization... 00:05:52.049 spdk_app_start is called in Round 2. 00:05:52.049 Shutdown signal received, stop current app iteration 00:05:52.049 Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 reinitialization... 00:05:52.049 spdk_app_start is called in Round 3. 
00:05:52.049 Shutdown signal received, stop current app iteration 00:05:52.049 18:57:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:52.049 18:57:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:52.049 00:05:52.049 real 0m23.474s 00:05:52.049 user 0m52.045s 00:05:52.049 sys 0m5.121s 00:05:52.049 18:57:57 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.049 18:57:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.049 ************************************ 00:05:52.049 END TEST app_repeat 00:05:52.049 ************************************ 00:05:52.049 18:57:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:52.049 18:57:57 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:52.049 18:57:57 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.049 18:57:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.049 18:57:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.049 ************************************ 00:05:52.049 START TEST cpu_locks 00:05:52.049 ************************************ 00:05:52.049 18:57:57 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:52.049 * Looking for test storage... 00:05:52.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:52.049 18:57:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:52.049 18:57:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:52.049 18:57:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:52.049 18:57:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:52.049 18:57:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.049 18:57:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.049 18:57:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.308 ************************************ 00:05:52.308 START TEST default_locks 00:05:52.308 ************************************ 00:05:52.308 18:57:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:52.308 18:57:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1532944 00:05:52.308 18:57:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.308 18:57:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1532944 00:05:52.308 18:57:57 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1532944 ']' 00:05:52.308 18:57:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.308 18:57:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.308 18:57:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
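The default_locks test starting here exercises SPDK's per-core file locking: spdk_tgt is launched on core 0 (-m 0x1), and locks_exist then asks lslocks whether that pid holds its spdk_cpu_lock file. Per the trace:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # does this pid hold a core-lock file?
    }
    locks_exist "$spdk_tgt_pid"                   # pid 1532944 in the run above

The "lslocks: write error" printed below is expected noise rather than a failure: grep -q exits on its first match and closes the pipe, so lslocks takes an EPIPE while still writing.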
00:05:52.308 18:57:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.308 18:57:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.308 [2024-07-24 18:57:57.862815] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:05:52.308 [2024-07-24 18:57:57.862997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1532944 ] 00:05:52.308 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.308 [2024-07-24 18:57:57.997453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.567 [2024-07-24 18:57:58.198512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.135 18:57:58 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.135 18:57:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:53.135 18:57:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1532944 00:05:53.135 18:57:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1532944 00:05:53.135 18:57:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.702 lslocks: write error 00:05:53.702 18:57:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1532944 00:05:53.702 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1532944 ']' 00:05:53.702 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1532944 00:05:53.702 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:53.702 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:53.702 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1532944 00:05:53.702 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:53.702 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:53.702 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1532944' 00:05:53.702 killing process with pid 1532944 00:05:53.702 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1532944 00:05:53.702 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1532944 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1532944 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1532944 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 1532944 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1532944 ']' 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1532944) - No such process 00:05:54.638 ERROR: process (pid: 1532944) is no longer running 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:54.638 00:05:54.638 real 0m2.236s 00:05:54.638 user 0m2.283s 00:05:54.638 sys 0m0.955s 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.638 18:57:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.638 ************************************ 00:05:54.638 END TEST default_locks 00:05:54.638 ************************************ 00:05:54.638 18:58:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:54.638 18:58:00 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.638 18:58:00 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.638 18:58:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.638 ************************************ 00:05:54.638 START TEST default_locks_via_rpc 00:05:54.638 ************************************ 00:05:54.638 18:58:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:54.638 18:58:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1533238 00:05:54.638 18:58:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.638 18:58:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 
1533238 00:05:54.638 18:58:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1533238 ']' 00:05:54.638 18:58:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.639 18:58:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.639 18:58:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.639 18:58:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.639 18:58:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.639 [2024-07-24 18:58:00.118176] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:05:54.639 [2024-07-24 18:58:00.118290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533238 ] 00:05:54.639 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.639 [2024-07-24 18:58:00.223541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.897 [2024-07-24 18:58:00.425646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1533238 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1533238 00:05:55.833 18:58:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.091 18:58:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # 
killprocess 1533238 00:05:56.091 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1533238 ']' 00:05:56.091 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1533238 00:05:56.091 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:56.091 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.091 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1533238 00:05:56.091 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.091 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.091 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1533238' 00:05:56.091 killing process with pid 1533238 00:05:56.091 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1533238 00:05:56.091 18:58:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1533238 00:05:56.658 00:05:56.658 real 0m2.220s 00:05:56.658 user 0m2.278s 00:05:56.658 sys 0m0.834s 00:05:56.658 18:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.658 18:58:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.658 ************************************ 00:05:56.658 END TEST default_locks_via_rpc 00:05:56.658 ************************************ 00:05:56.658 18:58:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:56.658 18:58:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.658 18:58:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.658 18:58:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.658 ************************************ 00:05:56.658 START TEST non_locking_app_on_locked_coremask 00:05:56.658 ************************************ 00:05:56.658 18:58:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:56.658 18:58:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1533536 00:05:56.658 18:58:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.658 18:58:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1533536 /var/tmp/spdk.sock 00:05:56.658 18:58:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1533536 ']' 00:05:56.658 18:58:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.658 18:58:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.658 18:58:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:56.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.658 18:58:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.658 18:58:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.918 [2024-07-24 18:58:02.412749] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:05:56.918 [2024-07-24 18:58:02.412867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533536 ] 00:05:56.918 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.918 [2024-07-24 18:58:02.517401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.176 [2024-07-24 18:58:02.719417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.743 18:58:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.743 18:58:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:57.743 18:58:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1533664 00:05:57.743 18:58:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:57.743 18:58:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1533664 /var/tmp/spdk2.sock 00:05:57.743 18:58:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1533664 ']' 00:05:57.743 18:58:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.743 18:58:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.743 18:58:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.743 18:58:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.743 18:58:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.743 [2024-07-24 18:58:03.252744] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:05:57.743 [2024-07-24 18:58:03.252916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533664 ] 00:05:57.743 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.743 [2024-07-24 18:58:03.437270] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
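The non_locking_app_on_locked_coremask case above boils down to two targets on the same core: the first (pid 1533536) claims core 0 normally, and the second only comes up because it is started with --disable-cpumask-locks and pointed at its own RPC socket. A condensed reproduction, runnable from an SPDK checkout; the sleep is a crude stand-in for waitforlisten:

    build/bin/spdk_tgt -m 0x1 &                                    # claims the core-0 lock
    pid1=$!
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    sleep 2
    lslocks -p "$pid1" | grep spdk_cpu_lock                        # only pid1 holds a lock
    kill "$pid1" "$pid2"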
00:05:57.743 [2024-07-24 18:58:03.437308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.311 [2024-07-24 18:58:03.838691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.248 18:58:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.248 18:58:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:59.248 18:58:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1533536 00:05:59.248 18:58:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1533536 00:05:59.248 18:58:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.624 lslocks: write error 00:06:00.624 18:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1533536 00:06:00.624 18:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1533536 ']' 00:06:00.624 18:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1533536 00:06:00.624 18:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:00.624 18:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.624 18:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1533536 00:06:00.624 18:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.624 18:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.624 18:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1533536' 00:06:00.624 killing process with pid 1533536 00:06:00.624 18:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1533536 00:06:00.624 18:58:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1533536 00:06:01.558 18:58:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1533664 00:06:01.558 18:58:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1533664 ']' 00:06:01.558 18:58:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1533664 00:06:01.558 18:58:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:01.558 18:58:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:01.558 18:58:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1533664 00:06:01.558 18:58:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:01.558 18:58:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:01.558 18:58:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1533664' 00:06:01.558 
killing process with pid 1533664 00:06:01.558 18:58:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1533664 00:06:01.558 18:58:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1533664 00:06:02.492 00:06:02.492 real 0m5.511s 00:06:02.492 user 0m5.760s 00:06:02.492 sys 0m1.920s 00:06:02.492 18:58:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.492 18:58:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.492 ************************************ 00:06:02.492 END TEST non_locking_app_on_locked_coremask 00:06:02.492 ************************************ 00:06:02.492 18:58:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:02.492 18:58:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.492 18:58:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.492 18:58:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.492 ************************************ 00:06:02.492 START TEST locking_app_on_unlocked_coremask 00:06:02.492 ************************************ 00:06:02.492 18:58:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:02.492 18:58:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1534228 00:06:02.492 18:58:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:02.492 18:58:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1534228 /var/tmp/spdk.sock 00:06:02.492 18:58:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1534228 ']' 00:06:02.492 18:58:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.492 18:58:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.492 18:58:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.492 18:58:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.492 18:58:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.492 [2024-07-24 18:58:07.964665] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:06:02.492 [2024-07-24 18:58:07.964753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1534228 ] 00:06:02.492 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.493 [2024-07-24 18:58:08.060882] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
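The "CPU core locks deactivated" notice refers to the per-core advisory lock files the target would otherwise take at startup, one per claimed core; the globs later in this log show they live at /var/tmp/spdk_cpu_lock_NNN. Emulating a claim with flock(1) is an assumption made purely for illustration (it demonstrates an advisory file lock, not SPDK's exact locking call), but it makes the contention tangible:

    ls /var/tmp/spdk_cpu_lock_* 2> /dev/null    # one file per claimed core
    lslocks | grep spdk_cpu_lock                # current holders
    (
        flock -n 9 || { echo "core 0 already claimed"; exit 1; }
        sleep 30                                # hold the claim for a while
    ) 9> /var/tmp/spdk_cpu_lock_000             # illustrative emulation only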
00:06:02.493 [2024-07-24 18:58:08.060959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.752 [2024-07-24 18:58:08.265746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.347 18:58:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.347 18:58:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:03.347 18:58:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1534364 00:06:03.347 18:58:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:03.347 18:58:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1534364 /var/tmp/spdk2.sock 00:06:03.347 18:58:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1534364 ']' 00:06:03.348 18:58:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.348 18:58:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.348 18:58:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.348 18:58:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.348 18:58:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.619 [2024-07-24 18:58:09.056902] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
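With the roles reversed in locking_app_on_unlocked_coremask, the plain second target (pid 1534364) is the one that acquires the core-0 lock, which is why the locks_exist 1534364 check just below passes. Incidentally, the stray "lslocks: write error" lines throughout this log are EPIPE noise: grep -q exits on the first match and lslocks keeps writing into the closed pipe; they are not test failures. Verifying lock ownership by hand mirrors the traced commands (pids are from this run):

    lslocks -p 1534364 | grep -q spdk_cpu_lock && echo "plain target holds the core lock"
    lslocks -p 1534228 | grep -q spdk_cpu_lock || echo "lock-free target holds nothing"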
00:06:03.619 [2024-07-24 18:58:09.057005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1534364 ] 00:06:03.619 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.619 [2024-07-24 18:58:09.229442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.185 [2024-07-24 18:58:09.639026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.121 18:58:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.121 18:58:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:05.121 18:58:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1534364 00:06:05.121 18:58:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1534364 00:06:05.121 18:58:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.697 lslocks: write error 00:06:05.697 18:58:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1534228 00:06:05.697 18:58:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1534228 ']' 00:06:05.697 18:58:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1534228 00:06:05.697 18:58:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:05.697 18:58:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.697 18:58:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1534228 00:06:05.697 18:58:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.697 18:58:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.697 18:58:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1534228' 00:06:05.697 killing process with pid 1534228 00:06:05.697 18:58:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1534228 00:06:05.697 18:58:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1534228 00:06:07.072 18:58:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1534364 00:06:07.072 18:58:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1534364 ']' 00:06:07.072 18:58:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1534364 00:06:07.072 18:58:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:07.072 18:58:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.072 18:58:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1534364 00:06:07.072 18:58:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:06:07.072 18:58:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.072 18:58:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1534364' 00:06:07.072 killing process with pid 1534364 00:06:07.072 18:58:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1534364 00:06:07.072 18:58:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1534364 00:06:07.639 00:06:07.639 real 0m5.292s 00:06:07.639 user 0m5.883s 00:06:07.639 sys 0m1.672s 00:06:07.639 18:58:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.639 18:58:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.639 ************************************ 00:06:07.639 END TEST locking_app_on_unlocked_coremask 00:06:07.639 ************************************ 00:06:07.639 18:58:13 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:07.639 18:58:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.639 18:58:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.639 18:58:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.639 ************************************ 00:06:07.639 START TEST locking_app_on_locked_coremask 00:06:07.639 ************************************ 00:06:07.639 18:58:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:07.639 18:58:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1534924 00:06:07.639 18:58:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.639 18:58:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1534924 /var/tmp/spdk.sock 00:06:07.639 18:58:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1534924 ']' 00:06:07.639 18:58:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.639 18:58:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.639 18:58:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.639 18:58:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.639 18:58:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.639 [2024-07-24 18:58:13.319443] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
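locking_app_on_locked_coremask inverts the expectation: the second, lock-enabled target on the already-claimed core 0 must fail to start, and the harness encodes that with the NOT wrapper (NOT waitforlisten ... below). A minimal sketch consistent with the es= bookkeeping in the trace; the real helper also vets its argument via valid_exec_arg and treats exit codes above 128 (signal deaths) specially:

    NOT() {
        local es=0
        "$@" || es=$?       # capture the wrapped command's exit status
        (( es != 0 ))       # succeed only when the command failed
    }
    NOT false && echo "failure was expected and observed"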
00:06:07.639 [2024-07-24 18:58:13.319531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1534924 ] 00:06:07.898 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.898 [2024-07-24 18:58:13.419768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.157 [2024-07-24 18:58:13.622627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1534940 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1534940 /var/tmp/spdk2.sock 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1534940 /var/tmp/spdk2.sock 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1534940 /var/tmp/spdk2.sock 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1534940 ']' 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.415 18:58:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.415 [2024-07-24 18:58:14.102775] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:06:08.415 [2024-07-24 18:58:14.102868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1534940 ] 00:06:08.674 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.674 [2024-07-24 18:58:14.270837] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1534924 has claimed it. 00:06:08.674 [2024-07-24 18:58:14.270973] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1534940) - No such process 00:06:09.608 ERROR: process (pid: 1534940) is no longer running 00:06:09.608 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.608 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:09.608 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:09.608 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.608 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.608 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.608 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1534924 00:06:09.608 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1534924 00:06:09.608 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.175 lslocks: write error 00:06:10.176 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1534924 00:06:10.176 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1534924 ']' 00:06:10.176 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1534924 00:06:10.176 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:10.176 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.176 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1534924 00:06:10.176 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.176 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:10.176 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1534924' 00:06:10.176 killing process with pid 1534924 00:06:10.176 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1534924 00:06:10.176 18:58:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1534924 00:06:10.743 00:06:10.743 real 0m2.983s 00:06:10.743 user 0m3.412s 00:06:10.743 sys 0m1.093s 00:06:10.743 18:58:16 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.743 18:58:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.743 ************************************ 00:06:10.743 END TEST locking_app_on_locked_coremask 00:06:10.743 ************************************ 00:06:10.743 18:58:16 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.743 18:58:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.743 18:58:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.743 18:58:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.743 ************************************ 00:06:10.743 START TEST locking_overlapped_coremask 00:06:10.743 ************************************ 00:06:10.743 18:58:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:10.743 18:58:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1535233 00:06:10.743 18:58:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:10.743 18:58:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1535233 /var/tmp/spdk.sock 00:06:10.743 18:58:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1535233 ']' 00:06:10.743 18:58:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.743 18:58:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.743 18:58:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.743 18:58:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.743 18:58:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.744 [2024-07-24 18:58:16.370998] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
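locking_overlapped_coremask moves from single cores to masks: the first target takes -m 0x7 (cores 0-2) and the second will ask for -m 0x1c (cores 2-4), so the two claims intersect at exactly core 2, the core named in the claim error further down. The overlap falls out of plain mask arithmetic:

    # 0x7 = 0b00111 (cores 0,1,2); 0x1c = 0b11100 (cores 2,3,4)
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))    # -> 0x4, i.e. core 2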
00:06:10.744 [2024-07-24 18:58:16.371108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535233 ] 00:06:10.744 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.002 [2024-07-24 18:58:16.473646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.002 [2024-07-24 18:58:16.658979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.002 [2024-07-24 18:58:16.659039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.002 [2024-07-24 18:58:16.659043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.377 18:58:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.377 18:58:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:12.377 18:58:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1535459 00:06:12.377 18:58:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:12.378 18:58:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1535459 /var/tmp/spdk2.sock 00:06:12.378 18:58:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:12.378 18:58:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1535459 /var/tmp/spdk2.sock 00:06:12.378 18:58:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:12.378 18:58:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.378 18:58:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:12.378 18:58:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.378 18:58:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1535459 /var/tmp/spdk2.sock 00:06:12.378 18:58:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1535459 ']' 00:06:12.378 18:58:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.378 18:58:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.378 18:58:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.378 18:58:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.378 18:58:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.378 [2024-07-24 18:58:17.759534] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
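Once the overlapping second target is refused, check_remaining_locks (visible in the next lines) asserts that exactly the first target's three lock files survive, by expanding the on-disk glob next to a brace-generated expected list and comparing the two word-for-word. The same comparison, condensed:

    locks=(/var/tmp/spdk_cpu_lock_*)                     # what is actually on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # what -m 0x7 should have left
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "lock set intact"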
00:06:12.378 [2024-07-24 18:58:17.759650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535459 ] 00:06:12.378 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.378 [2024-07-24 18:58:17.913625] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1535233 has claimed it. 00:06:12.378 [2024-07-24 18:58:17.913707] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:13.311 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1535459) - No such process 00:06:13.311 ERROR: process (pid: 1535459) is no longer running 00:06:13.311 18:58:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.311 18:58:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:13.311 18:58:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:13.311 18:58:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.311 18:58:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1535233 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1535233 ']' 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1535233 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1535233 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1535233' 00:06:13.312 killing process with pid 1535233 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 1535233 00:06:13.312 18:58:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1535233 00:06:13.878 00:06:13.878 real 0m3.037s 00:06:13.878 user 0m8.726s 00:06:13.878 sys 0m0.742s 00:06:13.878 18:58:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.878 18:58:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.878 ************************************ 00:06:13.878 END TEST locking_overlapped_coremask 00:06:13.878 ************************************ 00:06:13.878 18:58:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:13.878 18:58:19 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.878 18:58:19 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.878 18:58:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.878 ************************************ 00:06:13.878 START TEST locking_overlapped_coremask_via_rpc 00:06:13.878 ************************************ 00:06:13.878 18:58:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:13.878 18:58:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1535667 00:06:13.878 18:58:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:13.878 18:58:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1535667 /var/tmp/spdk.sock 00:06:13.878 18:58:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1535667 ']' 00:06:13.878 18:58:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.878 18:58:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.878 18:58:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.878 18:58:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.878 18:58:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.878 [2024-07-24 18:58:19.483170] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:06:13.878 [2024-07-24 18:58:19.483280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535667 ] 00:06:13.878 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.137 [2024-07-24 18:58:19.574841] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
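The final case, locking_overlapped_coremask_via_rpc, starts both overlapping targets with --disable-cpumask-locks so neither claims anything at boot, then switches locking on after the fact through the RPC layer. The first framework_enable_cpumask_locks call claims cores 0-2 and returns cleanly; the same call against the second target has to fail on the shared core 2, as the trace below confirms:

    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # must fail: core 2 taken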
00:06:14.137 [2024-07-24 18:58:19.574883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.137 [2024-07-24 18:58:19.791037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.137 [2024-07-24 18:58:19.791101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.137 [2024-07-24 18:58:19.791106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.704 18:58:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.704 18:58:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:14.704 18:58:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1535793 00:06:14.704 18:58:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:14.704 18:58:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1535793 /var/tmp/spdk2.sock 00:06:14.704 18:58:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1535793 ']' 00:06:14.704 18:58:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.704 18:58:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.704 18:58:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.704 18:58:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.704 18:58:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.704 [2024-07-24 18:58:20.177491] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:06:14.704 [2024-07-24 18:58:20.177622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535793 ] 00:06:14.704 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.704 [2024-07-24 18:58:20.302970] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
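Once the second target's reactors come up on cores 3, 4 and 2 in the next lines, two independent SPDK processes are scheduling reactors on the shared core 2 at the same time, which is exactly the double-booking the locks exist to prevent. This can be observed directly, since SPDK names its reactor threads reactor_N (pids are from this run; adjust for a local reproduction):

    # The PSR column shows the CPU each reactor thread is running on.
    ps -L -o pid,psr,comm -p 1535667,1535793 | grep reactor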
00:06:14.704 [2024-07-24 18:58:20.303033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.963 [2024-07-24 18:58:20.598250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.963 [2024-07-24 18:58:20.598311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:14.963 [2024-07-24 18:58:20.598314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.337 [2024-07-24 18:58:21.631593] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1535667 has claimed it. 
00:06:16.337 request: 00:06:16.337 { 00:06:16.337 "method": "framework_enable_cpumask_locks", 00:06:16.337 "req_id": 1 00:06:16.337 } 00:06:16.337 Got JSON-RPC error response 00:06:16.337 response: 00:06:16.337 { 00:06:16.337 "code": -32603, 00:06:16.337 "message": "Failed to claim CPU core: 2" 00:06:16.337 } 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1535667 /var/tmp/spdk.sock 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1535667 ']' 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1535793 /var/tmp/spdk2.sock 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1535793 ']' 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.337 18:58:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.903 18:58:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.903 18:58:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:16.903 18:58:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:16.903 18:58:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:16.903 18:58:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:16.903 18:58:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:16.903 00:06:16.903 real 0m3.040s 00:06:16.903 user 0m2.014s 00:06:16.903 sys 0m0.261s 00:06:16.903 18:58:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.903 18:58:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.903 ************************************ 00:06:16.903 END TEST locking_overlapped_coremask_via_rpc 00:06:16.903 ************************************ 00:06:16.903 18:58:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:16.903 18:58:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1535667 ]] 00:06:16.903 18:58:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1535667 00:06:16.903 18:58:22 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1535667 ']' 00:06:16.903 18:58:22 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1535667 00:06:16.903 18:58:22 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:16.903 18:58:22 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.903 18:58:22 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1535667 00:06:16.903 18:58:22 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.903 18:58:22 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.903 18:58:22 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1535667' 00:06:16.903 killing process with pid 1535667 00:06:16.903 18:58:22 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1535667 00:06:16.903 18:58:22 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1535667 00:06:17.472 18:58:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1535793 ]] 00:06:17.472 18:58:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1535793 00:06:17.472 18:58:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1535793 ']' 00:06:17.472 18:58:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1535793 00:06:17.472 18:58:23 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:17.472 18:58:23 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:17.472 18:58:23 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1535793 00:06:17.742 18:58:23 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:17.742 18:58:23 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:17.742 18:58:23 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1535793' 00:06:17.742 killing process with pid 1535793 00:06:17.742 18:58:23 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1535793 00:06:17.742 18:58:23 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1535793 00:06:18.369 18:58:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.369 18:58:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:18.369 18:58:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1535667 ]] 00:06:18.369 18:58:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1535667 00:06:18.369 18:58:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1535667 ']' 00:06:18.369 18:58:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1535667 00:06:18.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1535667) - No such process 00:06:18.369 18:58:23 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1535667 is not found' 00:06:18.369 Process with pid 1535667 is not found 00:06:18.369 18:58:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1535793 ]] 00:06:18.369 18:58:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1535793 00:06:18.369 18:58:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1535793 ']' 00:06:18.369 18:58:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1535793 00:06:18.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1535793) - No such process 00:06:18.369 18:58:23 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1535793 is not found' 00:06:18.369 Process with pid 1535793 is not found 00:06:18.369 18:58:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.369 00:06:18.369 real 0m26.215s 00:06:18.369 user 0m46.722s 00:06:18.369 sys 0m8.672s 00:06:18.369 18:58:23 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.369 18:58:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.369 ************************************ 00:06:18.369 END TEST cpu_locks 00:06:18.369 ************************************ 00:06:18.369 00:06:18.369 real 0m57.112s 00:06:18.369 user 1m49.345s 00:06:18.369 sys 0m15.091s 00:06:18.369 18:58:23 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.369 18:58:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.369 ************************************ 00:06:18.369 END TEST event 00:06:18.369 ************************************ 00:06:18.369 18:58:23 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:18.369 18:58:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.369 18:58:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.370 18:58:23 -- common/autotest_common.sh@10 -- # set +x 00:06:18.370 ************************************ 00:06:18.370 START TEST thread 00:06:18.370 ************************************ 00:06:18.370 18:58:23 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:18.370 * Looking for test storage... 00:06:18.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:18.370 18:58:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:18.370 18:58:24 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:18.370 18:58:24 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.370 18:58:24 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.629 ************************************ 00:06:18.629 START TEST thread_poller_perf 00:06:18.629 ************************************ 00:06:18.629 18:58:24 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:18.629 [2024-07-24 18:58:24.095232] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:06:18.629 [2024-07-24 18:58:24.095377] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536301 ] 00:06:18.629 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.629 [2024-07-24 18:58:24.238248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.888 [2024-07-24 18:58:24.445338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.888 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:20.264 ====================================== 00:06:20.264 busy:2732739327 (cyc) 00:06:20.264 total_run_count: 159000 00:06:20.264 tsc_hz: 2700000000 (cyc) 00:06:20.264 ====================================== 00:06:20.264 poller_cost: 17187 (cyc), 6365 (nsec) 00:06:20.264 00:06:20.264 real 0m1.586s 00:06:20.264 user 0m1.411s 00:06:20.264 sys 0m0.163s 00:06:20.264 18:58:25 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.264 18:58:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.264 ************************************ 00:06:20.264 END TEST thread_poller_perf 00:06:20.264 ************************************ 00:06:20.264 18:58:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.264 18:58:25 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:20.264 18:58:25 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.264 18:58:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.264 ************************************ 00:06:20.264 START TEST thread_poller_perf 00:06:20.264 ************************************ 00:06:20.264 18:58:25 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.264 [2024-07-24 18:58:25.749401] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
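The poller_cost figure above can be cross-checked by hand: it is busy cycles divided by total_run_count, converted to nanoseconds through tsc_hz.

    echo $(( 2732739327 / 159000 ))                # 17187 cycles per poller invocation
    echo $(( 17187 * 1000000000 / 2700000000 ))    # ~6365 ns at the 2.7 GHz TSC

The zero-period run that follows checks out the same way (2705680060 / 1890000 ~= 1431 cycles, ~530 ns); the far lower per-call cost is expected, since those pollers are rearmed immediately rather than on a 1 us timer.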
00:06:20.264 [2024-07-24 18:58:25.749543] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536504 ] 00:06:20.264 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.264 [2024-07-24 18:58:25.890702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.522 [2024-07-24 18:58:26.104406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.522 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:21.898 ====================================== 00:06:21.898 busy:2705680060 (cyc) 00:06:21.898 total_run_count: 1890000 00:06:21.898 tsc_hz: 2700000000 (cyc) 00:06:21.898 ====================================== 00:06:21.898 poller_cost: 1431 (cyc), 530 (nsec) 00:06:21.898 00:06:21.898 real 0m1.582s 00:06:21.898 user 0m1.411s 00:06:21.898 sys 0m0.158s 00:06:21.898 18:58:27 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.898 18:58:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.898 ************************************ 00:06:21.898 END TEST thread_poller_perf 00:06:21.898 ************************************ 00:06:21.898 18:58:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:21.898 00:06:21.898 real 0m3.367s 00:06:21.898 user 0m2.903s 00:06:21.898 sys 0m0.453s 00:06:21.898 18:58:27 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.898 18:58:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.898 ************************************ 00:06:21.898 END TEST thread 00:06:21.898 ************************************ 00:06:21.898 18:58:27 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:21.898 18:58:27 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:21.898 18:58:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.898 18:58:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.898 18:58:27 -- common/autotest_common.sh@10 -- # set +x 00:06:21.898 ************************************ 00:06:21.898 START TEST app_cmdline 00:06:21.898 ************************************ 00:06:21.898 18:58:27 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:21.898 * Looking for test storage... 00:06:21.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:21.898 18:58:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:21.898 18:58:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1536782 00:06:21.898 18:58:27 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:21.898 18:58:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1536782 00:06:21.898 18:58:27 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1536782 ']' 00:06:21.898 18:58:27 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.898 18:58:27 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.898 18:58:27 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:21.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.898 18:58:27 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.898 18:58:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:21.898 [2024-07-24 18:58:27.543686] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:06:21.898 [2024-07-24 18:58:27.543799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536782 ] 00:06:21.898 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.157 [2024-07-24 18:58:27.642853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.157 [2024-07-24 18:58:27.845646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.529 18:58:28 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.529 18:58:28 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:23.529 18:58:28 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:23.787 { 00:06:23.787 "version": "SPDK v24.09-pre git sha1 74f92fe69", 00:06:23.787 "fields": { 00:06:23.787 "major": 24, 00:06:23.787 "minor": 9, 00:06:23.787 "patch": 0, 00:06:23.787 "suffix": "-pre", 00:06:23.787 "commit": "74f92fe69" 00:06:23.787 } 00:06:23.787 } 00:06:23.787 18:58:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:23.787 18:58:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:23.787 18:58:29 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:23.788 18:58:29 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:23.788 18:58:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:23.788 18:58:29 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.788 18:58:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:23.788 18:58:29 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:23.788 18:58:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.788 18:58:29 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.788 18:58:29 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:23.788 18:58:29 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:23.788 18:58:29 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:23.788 18:58:29 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:23.788 18:58:29 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:23.788 18:58:29 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:23.788 18:58:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.788 18:58:29 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:23.788 18:58:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
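In outline, what cmdline.sh is probing here (a sketch, not the script itself): the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods are callable and anything else must be rejected, which is what the env_dpdk_get_mem_stats attempt being traced at this point demonstrates.

    ./scripts/rpc.py spdk_get_version                       # allowed: returns the version JSON above
    ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # allowed: yields exactly the two methods
    ./scripts/rpc.py env_dpdk_get_mem_stats                 # not allowlisted: JSON-RPC 'Method not found'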
00:06:23.788 18:58:29 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:23.788 18:58:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.788 18:58:29 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:23.788 18:58:29 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:23.788 18:58:29 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:24.353 request: 00:06:24.353 { 00:06:24.353 "method": "env_dpdk_get_mem_stats", 00:06:24.353 "req_id": 1 00:06:24.353 } 00:06:24.353 Got JSON-RPC error response 00:06:24.353 response: 00:06:24.353 { 00:06:24.353 "code": -32601, 00:06:24.353 "message": "Method not found" 00:06:24.353 } 00:06:24.353 18:58:29 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:24.353 18:58:29 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.353 18:58:29 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:24.353 18:58:29 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.353 18:58:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1536782 00:06:24.353 18:58:29 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1536782 ']' 00:06:24.353 18:58:29 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1536782 00:06:24.353 18:58:29 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:24.353 18:58:29 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.353 18:58:29 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1536782 00:06:24.353 18:58:29 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.353 18:58:29 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.353 18:58:29 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1536782' 00:06:24.353 killing process with pid 1536782 00:06:24.353 18:58:29 app_cmdline -- common/autotest_common.sh@969 -- # kill 1536782 00:06:24.353 18:58:29 app_cmdline -- common/autotest_common.sh@974 -- # wait 1536782 00:06:24.920 00:06:24.920 real 0m2.996s 00:06:24.920 user 0m3.987s 00:06:24.920 sys 0m0.762s 00:06:24.920 18:58:30 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.920 18:58:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.920 ************************************ 00:06:24.920 END TEST app_cmdline 00:06:24.920 ************************************ 00:06:24.920 18:58:30 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:24.920 18:58:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.920 18:58:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.920 18:58:30 -- common/autotest_common.sh@10 -- # set +x 00:06:24.920 ************************************ 00:06:24.920 START TEST version 00:06:24.920 ************************************ 00:06:24.920 18:58:30 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:24.920 * Looking for test storage... 
00:06:24.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:24.920 18:58:30 version -- app/version.sh@17 -- # get_header_version major 00:06:24.920 18:58:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:24.920 18:58:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.920 18:58:30 version -- app/version.sh@14 -- # cut -f2 00:06:24.920 18:58:30 version -- app/version.sh@17 -- # major=24 00:06:24.920 18:58:30 version -- app/version.sh@18 -- # get_header_version minor 00:06:24.920 18:58:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:24.920 18:58:30 version -- app/version.sh@14 -- # cut -f2 00:06:24.920 18:58:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.920 18:58:30 version -- app/version.sh@18 -- # minor=9 00:06:24.920 18:58:30 version -- app/version.sh@19 -- # get_header_version patch 00:06:24.920 18:58:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:24.920 18:58:30 version -- app/version.sh@14 -- # cut -f2 00:06:24.920 18:58:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.920 18:58:30 version -- app/version.sh@19 -- # patch=0 00:06:24.920 18:58:30 version -- app/version.sh@20 -- # get_header_version suffix 00:06:24.920 18:58:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:24.920 18:58:30 version -- app/version.sh@14 -- # cut -f2 00:06:24.920 18:58:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.920 18:58:30 version -- app/version.sh@20 -- # suffix=-pre 00:06:24.920 18:58:30 version -- app/version.sh@22 -- # version=24.9 00:06:24.920 18:58:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:24.920 18:58:30 version -- app/version.sh@28 -- # version=24.9rc0 00:06:24.920 18:58:30 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:24.920 18:58:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:25.180 18:58:30 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:25.180 18:58:30 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:25.180 00:06:25.180 real 0m0.163s 00:06:25.180 user 0m0.094s 00:06:25.180 sys 0m0.097s 00:06:25.180 18:58:30 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.180 18:58:30 version -- common/autotest_common.sh@10 -- # set +x 00:06:25.180 ************************************ 00:06:25.180 END TEST version 00:06:25.180 ************************************ 00:06:25.180 18:58:30 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:25.180 18:58:30 -- spdk/autotest.sh@202 -- # uname -s 00:06:25.180 18:58:30 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:06:25.180 18:58:30 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:25.180 18:58:30 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:25.180 18:58:30 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
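The get_header_version calls traced above boil down to a grep/cut/tr pipeline over include/spdk/version.h; a minimal reproduction (same helper name as version.sh, argument pre-uppercased for brevity):

    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
            | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)     # 24
    minor=$(get_header_version MINOR)     # 9
    suffix=$(get_header_version SUFFIX)   # -pre, rendered as rc0 in the PEP 440 form

The assembled 24.9rc0 is then compared against python3 -c 'import spdk; print(spdk.__version__)', the equality asserted at version.sh@31.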
00:06:25.180 18:58:30 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:25.180 18:58:30 -- spdk/autotest.sh@264 -- # timing_exit lib 00:06:25.180 18:58:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:25.180 18:58:30 -- common/autotest_common.sh@10 -- # set +x 00:06:25.180 18:58:30 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:25.180 18:58:30 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:25.180 18:58:30 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:25.180 18:58:30 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:25.180 18:58:30 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:06:25.180 18:58:30 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:06:25.180 18:58:30 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:25.180 18:58:30 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:25.180 18:58:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.180 18:58:30 -- common/autotest_common.sh@10 -- # set +x 00:06:25.180 ************************************ 00:06:25.180 START TEST nvmf_tcp 00:06:25.180 ************************************ 00:06:25.180 18:58:30 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:25.180 * Looking for test storage... 00:06:25.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:25.180 18:58:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:25.180 18:58:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:25.180 18:58:30 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:25.180 18:58:30 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:25.180 18:58:30 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.180 18:58:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:25.180 ************************************ 00:06:25.180 START TEST nvmf_target_core 00:06:25.180 ************************************ 00:06:25.180 18:58:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:25.440 * Looking for test storage... 00:06:25.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:25.440 ************************************ 00:06:25.440 START TEST nvmf_abort 00:06:25.440 ************************************ 00:06:25.440 18:58:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:25.440 * Looking for test storage... 
00:06:25.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.440 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:25.441 18:58:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
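The arrays being built above are common.sh's NIC inventory: known PCI device IDs grouped per family, with only the e810 group kept because this job runs SPDK_TEST_NVMF_NICS=e810, and each address then resolved to its kernel interface through sysfs. In outline (variable names as in the trace):

    pci_devs=("${e810[@]}")    # here: 0000:84:00.0 and 0000:84:00.1 (0x8086:0x159b)
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs prefix
        net_devs+=("${pci_net_devs[@]}")
    done
    # yielding the two 'Found net devices under ...' lines reported below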
00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:27.984 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:28.242 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:28.242 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:28.242 18:58:33 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:28.242 Found net devices under 0000:84:00.0: cvl_0_0 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:28.242 Found net devices under 0000:84:00.1: cvl_0_1 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:28.242 
18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:28.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:28.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:06:28.242 00:06:28.242 --- 10.0.0.2 ping statistics --- 00:06:28.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.242 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:28.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:28.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:06:28.242 00:06:28.242 --- 10.0.0.1 ping statistics --- 00:06:28.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.242 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=1538983 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1538983 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1538983 ']' 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.242 18:58:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.242 [2024-07-24 18:58:33.926826] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:06:28.242 [2024-07-24 18:58:33.926932] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:28.515 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.515 [2024-07-24 18:58:34.018199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.515 [2024-07-24 18:58:34.159207] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:28.515 [2024-07-24 18:58:34.159282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:28.515 [2024-07-24 18:58:34.159301] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:28.515 [2024-07-24 18:58:34.159317] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:28.515 [2024-07-24 18:58:34.159330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
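Condensed from the nvmf_tcp_init trace above (a sketch, not common.sh verbatim; the cvl_0_0/cvl_0_1 names and the 10.0.0.0/24 addressing are simply the values this run detected and assigned for the two E810 ports), the topology is one dual-port NIC with its ports reachable from each other, the target port isolated in a private network namespace, and nvmf_tgt launched inside that namespace:

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                  # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator port stays in the root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                               # root ns -> target ns sanity check
  ip netns exec "$NS" ping -c 1 10.0.0.1           # target ns -> root ns sanity check
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE  # target runs inside the ns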
00:06:28.515 [2024-07-24 18:58:34.159465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.515 [2024-07-24 18:58:34.159524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.515 [2024-07-24 18:58:34.159528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.774 [2024-07-24 18:58:34.334417] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.774 Malloc0 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.774 Delay0 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.774 [2024-07-24 18:58:34.414759] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.774 18:58:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:29.032 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.032 [2024-07-24 18:58:34.550778] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:31.562 Initializing NVMe Controllers 00:06:31.562 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:31.562 controller IO queue size 128 less than required 00:06:31.562 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:31.562 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:31.562 Initialization complete. Launching workers. 
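The stressor just launched is SPDK's abort example, pointed at the listener created two RPCs earlier. Reading its flags with their usual SPDK example meanings (an interpretation of the captured command line, not something this log spells out):

  # -c 0x1: one core; -t 1: run for ~1 second; -q 128: queue depth 128
  # (the source of the "IO queue size 128 less than required" warning above);
  # -l warning: log level; -r: transport ID of the target listener.
  ./build/examples/abort -c 0x1 -t 1 -l warning -q 128 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

In the tallies that follow, the NS line counts workload I/Os (completed vs. failed, the failed ones being those aborted out from under the workload) and the CTRLR line counts the abort commands themselves; the test essentially passes as long as the example exits cleanly, which it does here.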
00:06:31.562 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 24133 00:06:31.562 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 24194, failed to submit 62 00:06:31.562 success 24137, unsuccess 57, failed 0 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:31.562 rmmod nvme_tcp 00:06:31.562 rmmod nvme_fabrics 00:06:31.562 rmmod nvme_keyring 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1538983 ']' 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1538983 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1538983 ']' 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1538983 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1538983 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1538983' 00:06:31.562 killing process with pid 1538983 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1538983 00:06:31.562 18:58:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1538983 00:06:31.562 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:31.562 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:31.562 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:31.562 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:31.562 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:31.562 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.562 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.562 18:58:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.464 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:33.464 00:06:33.464 real 0m8.178s 00:06:33.464 user 0m11.017s 00:06:33.464 sys 0m3.214s 00:06:33.464 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.464 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:33.464 ************************************ 00:06:33.464 END TEST nvmf_abort 00:06:33.464 ************************************ 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:33.723 ************************************ 00:06:33.723 START TEST nvmf_ns_hotplug_stress 00:06:33.723 ************************************ 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:33.723 * Looking for test storage... 
00:06:33.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:33.723 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:33.724 18:58:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:37.081 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:37.081 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:37.082 18:58:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:37.082 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:37.082 Found net devices under 0000:84:00.0: cvl_0_0 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:37.082 Found net devices under 0000:84:00.1: cvl_0_1 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:37.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:37.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:06:37.082 00:06:37.082 --- 10.0.0.2 ping statistics --- 00:06:37.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.082 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:37.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:37.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:06:37.082 00:06:37.082 --- 10.0.0.1 ping statistics --- 00:06:37.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.082 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1541424 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1541424 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1541424 ']' 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
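With the target up inside the namespace again, the hotplug-stress bring-up that follows (ns_hotplug_stress.sh@27 through @36 in the trace below) reduces to the RPC sequence sketched here; paths are shortened to rpc.py, and the flag glosses in the comments are my reading of the standard options rather than anything this log states:

  rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0           # 32 MiB backing bdev, 512 B blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py bdev_null_create NULL1 1000 512                # 1000 MiB null bdev, resized later
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

spdk_nvme_perf is then run for 30 seconds of queue-depth-128, 512-byte randread, with -Q 1000 letting it continue on error while logging only every 1000th failure, which is where the "Message suppressed 999 times" lines further down come from.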
00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.082 18:58:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:37.082 [2024-07-24 18:58:42.318315] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:06:37.082 [2024-07-24 18:58:42.318501] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.082 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.082 [2024-07-24 18:58:42.443759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.082 [2024-07-24 18:58:42.583192] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:37.082 [2024-07-24 18:58:42.583273] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:37.083 [2024-07-24 18:58:42.583293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.083 [2024-07-24 18:58:42.583311] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:37.083 [2024-07-24 18:58:42.583325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:37.083 [2024-07-24 18:58:42.583401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.083 [2024-07-24 18:58:42.583475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.083 [2024-07-24 18:58:42.583481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.016 18:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.016 18:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:38.016 18:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:38.016 18:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:38.016 18:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:38.016 18:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:38.016 18:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:38.016 18:58:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:38.582 [2024-07-24 18:58:44.100460] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:38.582 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:39.146 18:58:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:39.403 
[2024-07-24 18:58:45.075635] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:39.403 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:39.969 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:40.227 Malloc0 00:06:40.485 18:58:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:41.050 Delay0 00:06:41.050 18:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.308 18:58:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:41.873 NULL1 00:06:41.873 18:58:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:42.439 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1542119 00:06:42.439 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:42.439 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:06:42.439 18:58:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.439 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.813 Read completed with error (sct=0, sc=11) 00:06:43.813 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.070 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:44.070 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:44.329 true 00:06:44.329 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:06:44.329 18:58:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.895 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.153 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:45.410 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:45.410 18:58:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:45.974 true 00:06:45.974 18:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:06:45.974 18:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.539 18:58:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:46.798 18:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:46.798 18:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:47.363 true 00:06:47.363 18:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:06:47.363 18:58:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.928 18:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.185 18:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:48.185 18:58:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:49.118 true 00:06:49.118 18:58:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:06:49.118 18:58:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.085 18:58:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.353 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:50.353 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:50.922 true 00:06:50.922 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:06:50.922 18:58:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.295 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:52.553 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:52.553 18:58:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:52.812 true 00:06:52.812 18:58:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:06:52.812 18:58:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.378 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.639 [2024-07-24 18:58:59.292461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.639
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line ('Read NLB 1 * block size 512 > SGL length 1') repeats ~96 more times, timestamps 2024-07-24 18:58:59.292595 through 18:58:59.300121 ...]
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.300196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.300273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.300347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.300418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.300513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.300595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.300678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.300750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.300817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.300887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.300960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.301029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.301103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.301815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.301894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.301964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.302036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.302114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.302185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.302261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.302331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.302401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.302479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.302543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.302610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 
[2024-07-24 18:58:59.302677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.302749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.302818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.302894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.640 [2024-07-24 18:58:59.302964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.303034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.303105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.303174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.303245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.303317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.303384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.303462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.303536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.303617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.303698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.303773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.303843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.303923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.303996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.304077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.304150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.304218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.304303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.304376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.304463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.304534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.304603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.304684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.304757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.304835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.304909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.304981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.305056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.305127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.305211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.305287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.305363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.305443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.305511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.305587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.305659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.305744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.305816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.305897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.305972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.306046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.306126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.306195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.306276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.306348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.306422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.306507] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.306770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.306847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.306912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.306973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.307041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.307113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.307186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.307257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.307325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.307402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.307487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.307560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.307630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.307699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.307767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.307843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.307907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.308738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.308819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.308892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.308959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.309031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.309104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.309174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.309242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 
[2024-07-24 18:58:59.309313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.309387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.309468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.309541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.309603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.309667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.309740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.309811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.309879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.309959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.641 [2024-07-24 18:58:59.310036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.310105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.310176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.310246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.310317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.310394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.310478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.310558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.310632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.310702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.310783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.310858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.310935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.311007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.311082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.311161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.311234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.311305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.311382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.311463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.311538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.311609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.311686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.311758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.311846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.311920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.311989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.312069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.312140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.312208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.312285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.312358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.312438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.312512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.312581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.312657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.312727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.312796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.312872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.312944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.313016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.313089] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.313159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.313226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.313297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.314096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.314175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.314248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.314320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.314393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.314477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.314546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.314622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.314684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.314748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.314819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.314890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.314971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.315041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.315115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.315188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.315262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.315337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.315405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.315487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.315558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.315630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 
[2024-07-24 18:58:59.315712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.315787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.315865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.315937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.316008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.316088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.316163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.316235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.316307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.316375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.316464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.316536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.316621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.316695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.316763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.316838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.316915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.642 [2024-07-24 18:58:59.316994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.317066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.317144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.317215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.317292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.317367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.317449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.317527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.317606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.317686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.317758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.317832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.317913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.317988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.318061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.318133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.318203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.318280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.318354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.318426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.318512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.318585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.318655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.318728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.318799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.319069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.319136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.319208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.319278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.319345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.319414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.319502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.319581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.319654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.643 [2024-07-24 18:58:59.319726] ctrlr_bdev.c: 
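The two RPCs above are the heart of the hotplug stress loop: detach namespace 1 from the subsystem, then re-attach the Delay0 bdev while the initiator keeps reading, which is what produces the burst of read errors seen here. A minimal sketch of that step, assuming a plain detach/re-attach sequence (the rpc.py path, NQN, and bdev name are taken from this log; the wrapper script itself is illustrative, not the exact ns_hotplug_stress.sh source):

    #!/usr/bin/env bash
    # Illustrative reduction of the hotplug step logged above (assumption: a
    # simple detach/re-attach; the real ns_hotplug_stress.sh may differ).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    # Detach namespace 1; reads that arrive while it is gone complete with
    # errors, which the target logs and then rate-suppresses as shown above.
    "$RPC" nvmf_subsystem_remove_ns "$NQN" 1
    # Re-attach the Delay0 bdev as a namespace of the same subsystem.
    "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0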
00:06:53.643 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:06:53.643 [2024-07-24 18:58:59.319798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... the preceding read-error record repeated, timestamps through 2024-07-24 18:58:59.320229 ...]
00:06:53.643 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:06:53.643 [2024-07-24 18:58:59.320879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:53.643 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
[... the preceding read-error record repeated continuously, timestamps 2024-07-24 18:58:59.320960 through 18:58:59.338463 ...]
[2024-07-24 18:58:59.338537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.338610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.338685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.338755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.338844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.338915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.338989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.339061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.339141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.339214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.339278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.339352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.339423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.339506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.339585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.339656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.339729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.339801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.339873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.339956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.340030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.340106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.340172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.340245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.340317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.340391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.340478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.340554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.340637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.340715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.340788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.340860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.340933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.341005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.341082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.341153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.341232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.341308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.341385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.341466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.341557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.341637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.341719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.341798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.341870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.341949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.342023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.342104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.342197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.342266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.922 [2024-07-24 18:58:59.342342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.342417] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.342519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.342596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.342679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.342769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.342841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.342919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.342994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.343277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.343363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.343452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.343531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.343607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.343684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.343773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.343853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.343928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.344014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.344101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.344178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.344256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.344333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.344408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.344493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.344568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.345162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 
[2024-07-24 18:58:59.345243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.345325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.345401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.345483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.345556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.345643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.345714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.345784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.345863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.345934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.346008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.346081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.346154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.346234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.346308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.346383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.346475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.346555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.346625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.346691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.346762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.346837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.346911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.346989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.347071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.347145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.347218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.347294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.347365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.347451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.347530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.347601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.347676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.347750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.347823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.347904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.347977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.348051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.348136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.348212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.348296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.348376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.348463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.348546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.348623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.348710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.348788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.348867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.348948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.349034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.349109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.349183] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.349267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.349345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.349421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.349508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.349583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.923 [2024-07-24 18:58:59.349665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.349740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.349823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.349898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.349971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.350056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.350341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.350419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.350508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.350595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.350669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.350744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.350820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.350895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.350976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.351047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.351119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.351185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.351256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.351331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 
[2024-07-24 18:58:59.351400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.351489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.351565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.351644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.351723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.351803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.351880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.351957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.352029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.352102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.352167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.352239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.352310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.352389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.352472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.352550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.352622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.352694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.352766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.352837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.352907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.352984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.353061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.353129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.353198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.353269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.353342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.353419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.353508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.353582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.353660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.353737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.354871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.354956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.355034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.355108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.355181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.355273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.355352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.355460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.355543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.355619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.355692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.355777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.355854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.355930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.356003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.356079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.356157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.356233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.356318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.356395] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.924 [2024-07-24 18:58:59.356476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.356559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.356633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.356710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.356788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.356864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.356947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.357019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.357104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.357180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.357252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.357322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.357386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.357470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.357544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.357614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.357686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.357760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.357842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.357918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.357988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.358060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.358136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.358211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.358286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 
[2024-07-24 18:58:59.358352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.358424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.358507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.358580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.358659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.358733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.358812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.358884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.358957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.359031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.359109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.359185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.359255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.359328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.359396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.359482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.359555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.359628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.359700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.359980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.360057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.360128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.360201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.360286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.360359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.360457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.360541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.360623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.360699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.360781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.360857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.360930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.361009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.361084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.361167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.361244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.361331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.361409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.361491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.361570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.925 [2024-07-24 18:58:59.361644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.361724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.361796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.361869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.361950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.362024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.362098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.362171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.362245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.362330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.362407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.362493] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.362570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.362648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.362726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.362801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.362886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.362961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.363032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.363116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.363189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.363273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.363351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.363447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.363527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.363610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.363689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.363762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.363828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.363896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.363969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.364040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.364117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.364196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.364268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.364349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.364421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 
[2024-07-24 18:58:59.364509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.364588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.364664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.364738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.364808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.365549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.365627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.365697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.365770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.365846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.365922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.365999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.366073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.366144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.366215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.366298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.366366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.366456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.366535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.366608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.366689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.366762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.366837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.366911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.366985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.926 [2024-07-24 18:58:59.367072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:06:53.926 [2024-07-24 18:58:59.367156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* line repeated several hundred times, elapsed 00:06:53.926 through 00:06:53.932, timestamps 2024-07-24 18:58:59.367238 through 18:58:59.406308 ...]
00:06:53.932 Message suppressed 999 times: [2024-07-24 18:58:59.406387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:53.932 Read completed with error (sct=0, sc=15)
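For anyone triaging this flood: the repeated message comes from the read-path length validation in ctrlr_bdev.c (nvmf_bdev_ctrlr_read_cmd), which rejects a read whose requested transfer, NLB times the block size, does not fit in the buffer described by the command's SGL; the request then completes with sct=0, sc=15 (0x0f), consistent with an invalid-SGL-length status. Below is a minimal sketch of that check, assuming simplified types; it is an illustration, not SPDK's verbatim code, and the function name validate_read_len and its parameters are hypothetical.

/*
 * Minimal illustrative sketch (an assumption, not SPDK's verbatim
 * ctrlr_bdev.c code) of the length check that produces the repeated
 * "Read NLB ... * block size ... > SGL length ..." error above.
 * The name validate_read_len is hypothetical.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool
validate_read_len(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
{
	/* The requested transfer (NLB blocks of block_size bytes) must fit
	 * in the buffer described by the command's SGL. */
	if (nlb * block_size > sgl_length) {
		fprintf(stderr,
			"Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n",
			nlb, block_size, sgl_length);
		return false;	/* caller fails the request: sct=0, sc=0x0f */
	}
	return true;
}

int
main(void)
{
	/* The failing case from this log: 1 block of 512 bytes against a
	 * 1-byte SGL, hence "Read NLB 1 * block size 512 > SGL length 1". */
	validate_read_len(1, 512, 1);
	return 0;
}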
[... identical *ERROR* line repeated further, elapsed 00:06:53.932 through 00:06:53.933, timestamps 2024-07-24 18:58:59.406478 through 18:58:59.412086 ...]
00:06:53.933 [2024-07-24 18:58:59.412174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*:
Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.412250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.412325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.412402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.412486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.412566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.412649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.412730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.412810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.412883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.413159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.413239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.413315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.413389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.413484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.413563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.413649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.413726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.413798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.413884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.413961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.414035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.414120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.414197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.414272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.414349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.414438] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.414519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.414591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.414655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.414728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.414807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.414882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.414957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.415038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.415115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.415192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.415266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.415342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.415418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.415516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.415593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.415657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.415731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.415807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.415879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.415951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.416029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.416106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.416178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.416250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.416322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 
[2024-07-24 18:58:59.416393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.416477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.416547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.416612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.416685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.416766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.416843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.416911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.416990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.417066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.417147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.417221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.417294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.417366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.417455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.417536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.417609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.417683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.417756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.933 [2024-07-24 18:58:59.417827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.417911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.419045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.419129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.419207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.419284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.419371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.419454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.419544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.419619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.419709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.419783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.419865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.419940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.420033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.420104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.420169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.420240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.420313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.420386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.420466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.420546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.420620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.420693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.420765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.420835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.420907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.420973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.421056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.421128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.421202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.421272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.421352] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.421437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.421510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.421583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.421663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.421731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.421804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.421881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.421956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.422030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.422105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.422180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.422263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.422336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.422410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.422502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.422578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.422649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.422729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.422801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.422893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.422969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.423041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.423125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.423208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.423290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 
[2024-07-24 18:58:59.423364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.423468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.423553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.423639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.423718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.423791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.423863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.423937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.424213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.424305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.424382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.424467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.424550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.424626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.424701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.424782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.424859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.424947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.425029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.425113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.425192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.425268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.425344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.425419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.934 [2024-07-24 18:58:59.425512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.426245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.426343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.426442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.426549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.426631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.426722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.426802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.426875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.426950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.427026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.427102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.427174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.427248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.427321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.427395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.427509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.427583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.427650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.427741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.427815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.427896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.427969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.428047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.428119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.428184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.428258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.428332] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.428405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.428500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.428577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.428655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.428742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.428818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.428896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.428975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.429049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.429113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.429186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.429266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.429338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.429412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.429506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.429584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.429663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.429749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.429820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.429897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.429975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.430050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.430122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.430196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.430274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 
[2024-07-24 18:58:59.430353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.430425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.430522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.430595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.430675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.430768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.430845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.430928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.431003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.431075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.431161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.431237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.431528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.431612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.431690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.431765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.431837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.431917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.431994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.432078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.432155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.432244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.432323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.432396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.432490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.432569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.432650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.432724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.432814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.432893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.935 [2024-07-24 18:58:59.432970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.433048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.433121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.433194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.433268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.433340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.433415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.433512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.433594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.433668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.433741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.433814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.433892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.433974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.434053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.434128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.434200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.434272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.434345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.434419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.434517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.434582] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.434654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.434728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.434806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.434885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.434970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.435048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.435124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.435200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.435275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.435349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.435426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.435517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.435593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.435667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.435749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.435832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.435922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.436002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.436077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.436149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.436230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.436305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.436387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.437543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.437640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 
[2024-07-24 18:58:59.437718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.437798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.437874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.437947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.438033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.438110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.438194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.438271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.438350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.438426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.438523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.438599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.438676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.438766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.438841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.438917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.438991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.439062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.439143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.439218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.439293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.439359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.439440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.439527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.439603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.439680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.439755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.439825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.439897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.439969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.440043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.440119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.440199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.440270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.440335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.440407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.440490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.440569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.440642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.936 [2024-07-24 18:58:59.440714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.440794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.440870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.440945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.441015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.441093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.441181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.441250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.441320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.441388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.441481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.441554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.441629] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.441700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.441772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.441846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.441916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.441989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.442059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.442129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.442198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.442272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.442346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.442640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.442720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.442803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.442880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.442967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.443045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.443122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.443207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.443289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.443365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.443452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.443527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.443601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.443680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 [2024-07-24 18:58:59.443753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 
[2024-07-24 18:58:59.443835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.937 ... last message repeated through [2024-07-24 18:58:59.491701] 00:06:53.943 
[2024-07-24 18:58:59.491778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.943 [2024-07-24 18:58:59.491854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.943 [2024-07-24 18:58:59.491929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.492016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.492092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.492167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.492240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.492313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.492394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.492482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.492555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.492637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.492714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.492790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.492857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.492929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.493001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.493081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.493152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.493234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.493312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.493388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.493469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.493553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.493638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.493712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.493786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.493851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.493922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.493992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.494075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.494159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.494235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.494306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.494376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.494459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.494539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.494611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.494690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.494756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.494823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.494891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.494964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.495038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.495111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.495186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.495262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.495334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.495407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.495495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.495566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.495640] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.495727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:53.944 [2024-07-24 18:58:59.496011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.496091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.496167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.496240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.496315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.496388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.496475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.496548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.496632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.496706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.496781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.496857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.496929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.497008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.497083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.497171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.497248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.497319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.497398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.497491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.497565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.497647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.497721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.497808] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.497885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.497966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.498040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.498113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.498197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.498271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.498350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.498444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.498527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.944 [2024-07-24 18:58:59.498605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.498677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.498764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.498840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.498919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.499002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.499074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.499156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.499234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.499309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.499375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.499458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.499531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.499604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.499677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.499749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 
[2024-07-24 18:58:59.499832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.499907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.499985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.500062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.500136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.500209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.500285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.500353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.500423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.500513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.500589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.500665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.500736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.500811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.502047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.502136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.502210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.502281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.502354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.502439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.502512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.502601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.502678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.502754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.502832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.502906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.502989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.503064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.503141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.503219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.503290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.503373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.503460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.503546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.503624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.503695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.503772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.503849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.503934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.504008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.504085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.504172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.504251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.504327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.504399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.504492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.504568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.504656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.504733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.504816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.504893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.504977] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.505051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.505122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.505201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.505274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.505361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.505443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.505530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.505604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.505695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.505771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.505839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.505904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.505978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.506054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.506134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.506209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.506279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.506354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.945 [2024-07-24 18:58:59.506443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.506522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.506597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.506676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.506760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.506830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.506898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 
[2024-07-24 18:58:59.506968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.507232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.507322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.507404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.507491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.507573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.507651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.507716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.507791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.507868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.507945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.508025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.508104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.508177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.508255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.508326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.508402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.508486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.509201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.509283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.509364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.509451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.509535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.509611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.509686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.509764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.509840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.509926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.510004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.510077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.510162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.510242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.510319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.510400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.510485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.510567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.510652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.510733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.510806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.510890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.510965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.511045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.511122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.511193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.511279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.511356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.511440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.511516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.511594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.511675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.511752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.511838] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.511915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.511995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.512066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.512132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.512205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.512282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.512353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.512452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.512537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.512616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.512687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.512768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.512843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.512915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.512986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.513064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.513131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.513211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.513293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.513364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.513446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.513521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.513599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.513674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.513755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 
[2024-07-24 18:58:59.513827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.513897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.946 [2024-07-24 18:58:59.513974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.514050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.514115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.514397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.514495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.514568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.514648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.514721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.514793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.514869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.514946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.515019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.515088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.515173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.515248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.515337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.515418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.515501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.515582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.515658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.515752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.515831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.515914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.515992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.516073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.516150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.516228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.516314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.516387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.516482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.516562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.516644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.516719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.516796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.516881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.516957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.517041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.517118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.517191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.517277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.517354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.517439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.517518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.517591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.517674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.517747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.517834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.517914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.518010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.518973] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.519053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.519127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.519203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.519283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.519356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.519441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.519507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.519580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.519652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.519723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.519801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.519872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.519943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.520017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.520088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.520170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.520240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.520313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.520378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.520460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.520539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.520615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.520686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.520764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 [2024-07-24 18:58:59.520844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947 
[2024-07-24 18:58:59.520921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.947
00:06:53.954 [2024-07-24 18:58:59.568749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:53.954 [... the *ERROR* line above repeats several hundred times with advancing timestamps (18:58:59.520921 through 18:58:59.568749); only the first and last occurrences are kept here ...]
[2024-07-24 18:58:59.568814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.568889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.568961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.569034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.569108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.569181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.569254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.569331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.569403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.569489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.569561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.569635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.569711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.569789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.569877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.569951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.570032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.570110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.570187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.570273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.570348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.954 [2024-07-24 18:58:59.570425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.570522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.570597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.570676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.570748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.570830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.570908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.570992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.571070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.571144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.571230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.571304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.571384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.571471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.571551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.571626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.571710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.571786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.571858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.571942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.572019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.572090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.572167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.572240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.572324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.572400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.572490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.572570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.572644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.572725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.572799] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.572876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.572953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.573047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.573124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.573197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.573286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.573362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.573683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.573770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.573844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.573917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.573991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.574072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.574152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.574224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.574301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.574380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.574458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.574530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.574603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.574678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.574749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.574822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.574902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.574981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 
[2024-07-24 18:58:59.575054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.575125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.575202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.575275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.575347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.575417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.575501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.575572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.575648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.575726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.575798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.575877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.575950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.576020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.576091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.576170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.576244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.576314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.576382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.576464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.576544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.576623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.576698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.576772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.576847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.576920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.577005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.577079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.577153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.955 [2024-07-24 18:58:59.577233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.577307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.577389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.577474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.577548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.577630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.577704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.577785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.577859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.577932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.578014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.578088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.578168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.578245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.578317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.578402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.579557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.579644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.579721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.579800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.579873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.579963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.580036] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.580120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.580195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.580270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.580342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.580416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.580499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.580564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.580637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.580717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.580796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.580870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.580947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.581023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.581096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.581168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.581239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.581317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.581389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.581465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.581539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.581610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.581685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.581762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.581837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.581912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 
[2024-07-24 18:58:59.581985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.582057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.582130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.582201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.582274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.582348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.582424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.582509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.582607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.582685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.582763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.582841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.582917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.583008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.583082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.583160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.583235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.583306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.583394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.583478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.583579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.583656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.583731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.583806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.583882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.583968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.584042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.584119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.584195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.584267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.584347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.584421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:53.956 [2024-07-24 18:58:59.584713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.584794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.584875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.584952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.585025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.585111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.585187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.585271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.585350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.956 [2024-07-24 18:58:59.585441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.585518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.585601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.585673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.585748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.585831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.585908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.585987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.586619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.586701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:06:53.957 [2024-07-24 18:58:59.586776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.586849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.586925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.586990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.587063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.587137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.587210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.587283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.587357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.587450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.587535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.587610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.587689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.587771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.587857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.587925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.587998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.588069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.588144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.588223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.588296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.588368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.588463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.588539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.588615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.588685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.588757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.588831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.588904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.588977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.589051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.589127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.589205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.589289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.589364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.589451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.589528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.589601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.589681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.589764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.589842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.589919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.589992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.590074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.590155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.590237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.590314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.590397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.590487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.590561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.590647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.590722] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.590803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.590879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.590955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.591039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.591114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.591192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.591269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.591352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.591438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.591513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.591786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.591868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.591945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.592020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.592104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.592179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.592254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.592331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.592406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.592485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.592560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.592633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.592706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.592779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.592855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 
[2024-07-24 18:58:59.592927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.592999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.593077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.593157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.593230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.957 [2024-07-24 18:58:59.593306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.593376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.593450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.593524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.593603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.593680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.593760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.593838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.593919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.593991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.594061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.594133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.594208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.594281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.594347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.594418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.594509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.594587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.594669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.594741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.594815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.594908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.594980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.595062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.595129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.595203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.596324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.596410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.596509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.596588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.596667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.596754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.596850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.596923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.596994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.597073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.597145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.597217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.597320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.597397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.597481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.597555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.958 [2024-07-24 18:58:59.597633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.959 [2024-07-24 18:58:59.597706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.959 [2024-07-24 18:58:59.597787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.959 [2024-07-24 18:58:59.597877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:53.959 [2024-07-24 18:58:59.597960] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd "Read NLB 1 * block size 512 > SGL length 1" error repeated continuously, wall timestamps 2024-07-24 18:58:59.598033 through 18:58:59.645254, elapsed 00:06:53.959 through 00:06:54.244 ...] 00:06:54.244 [2024-07-24 18:58:59.645327] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.645409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.645713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.645805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.645906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.645995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.646086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.646161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.646238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.646314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.646402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.646503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.646589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.646662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.646748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.646837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.646912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.646988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.647063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.647135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.647213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.647288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.647360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.647456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.647531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.647607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 
[2024-07-24 18:58:59.647683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.647765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.647838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.647904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.647979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.648052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.648128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.648203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.648276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.648353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.648437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.648520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.648600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.648674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.648749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.648823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.648888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.648960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.649035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.649113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.649187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.649263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.649944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.650021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.650096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.650172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.650247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.650321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.650394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.650494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.650574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.650643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.650726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.650802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.650877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.650944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.651018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.651102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.651178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.651259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.651340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.651414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.651524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.651601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.651674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.651763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.651854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.651935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.652010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.652098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.652179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.652257] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.652333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.652414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.652502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.652574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.652659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.652733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.244 [2024-07-24 18:58:59.652809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.652890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.652962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.653040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.653112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.653190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.653266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.653349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.653436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.653513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.653595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.653670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.653737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.653803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.653876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.653947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.654019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.654093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.654164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 
[2024-07-24 18:58:59.654239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.654320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.654393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.654482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.654550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.654618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.654696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.654768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.654847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.655114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.655193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.655267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.655342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.655416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.655517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.655591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.655666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.655759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.655831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.655904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.655975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.656044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.656117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.656190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.656281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.656358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.657264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.657342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.657449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.657524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.657609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.657687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.657763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.657849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.657926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.658001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.658094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.658165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.658245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.658344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.658420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.658506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.658590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.658669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.658757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.658838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.658915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.658991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.659070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.659149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.659221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.659300] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.659376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.659484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.659554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.659627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.659715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.659785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.659861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.659938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.660011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.660089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.660168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.660245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.660321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.660402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.245 [2024-07-24 18:58:59.660508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.660580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.660657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.660746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.660817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.660906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.660975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.661046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.661119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.661192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.661269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 
[2024-07-24 18:58:59.661344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.661416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.661510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.661576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.661655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.661730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.661804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.661876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.661951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.662026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.662098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.662172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.662246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.662529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.662613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.662691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.662768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.662841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.662915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.662992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.663067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.663151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.663225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.663299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.663377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.663475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.663556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.663645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.663722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.663797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.663879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.663955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.664039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.664116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.664188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.664272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.664347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.664426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.664528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.664602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.664677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.664767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.664855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.664930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.665006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.665082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.665154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.665234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.665311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.665390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.665483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.665557] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.665635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.665709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.665792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.665872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.665948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.666013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.666089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.667167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.667255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.667329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.667402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.667487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.667565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.667637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.667702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.667775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.667847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.667917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.667992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.668067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.668140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.668212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.246 [2024-07-24 18:58:59.668289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.668364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.668445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 
[2024-07-24 18:58:59.668518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.668589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.668659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.668736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.668807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.668890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.668965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.669040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.669116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.669190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.669271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.669345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.669418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.669508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.669582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.669671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.669755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.669829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.669906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.669981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.670063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.670138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.670220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.670298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.670374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.670483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.670556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.670634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.670724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.670809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.670891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.670965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.671041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.671118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.671199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.671275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.671355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.671439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.671512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.671591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.671666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.671754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.671830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.671906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.671982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.672056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.672310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.672390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:54.247 [2024-07-24 18:58:59.672480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.672559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.672637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:06:54.247 [2024-07-24 18:58:59.672710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.672791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.672864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.672940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.673014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.673091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.673161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.673238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.673303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.673375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.673463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.673542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.674109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.674188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.674265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.674343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.674417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.674520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.674594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.674666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.674753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.674829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.674900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.674971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.675043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.247 [2024-07-24 18:58:59.675116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1
[... ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 -- this line repeated several hundred times between 2024-07-24 18:58:59.675181 and 18:58:59.722747 (runner time 00:06:54.247 - 00:06:54.254), only the microsecond timestamp advancing; duplicate lines elided ...]
00:06:54.254 [2024-07-24 18:58:59.722823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.722915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.722990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.723068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.723151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.723232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.723308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.723592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.723663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.723740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.723811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.723886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.723959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.724029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.724104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.724178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.724250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.724329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.724403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.724490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.724565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.724631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.724700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.724779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.725388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.725475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.725554] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.725629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.725706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.725779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.725854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.725931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.726006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.254 [2024-07-24 18:58:59.726075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.726146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.726218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.726290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.726361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.726439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.726540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.726612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.726684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.726783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.726859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.726935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.727009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.727083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.727164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.727237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.727316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.727390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.727490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 
[2024-07-24 18:58:59.727568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.727644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.727731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.727812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.727899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.727977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.728052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.728127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.728200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.728286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.728361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.728448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.728525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.728600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.728691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.728768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.728846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.728926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.728999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.729079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.729154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.729233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.729316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.729402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.729488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.729563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.729643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.729720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.729793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.729874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.729948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.730023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.730089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.730163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.730240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.731086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.731173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.731244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.731320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.731400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.731483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.731565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.731648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.731720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.731787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.731855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.731924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.731997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.732070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.732151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.732227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.732299] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.732372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.732454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.732528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.732608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.732682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.732758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.732839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.732912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.732990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.255 [2024-07-24 18:58:59.733068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.733142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.733224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.733298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.733374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.733458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.733536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.733614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.733698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.733783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.733861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.733932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.734014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.734089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.734163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.734238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 
[2024-07-24 18:58:59.734313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.734399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.734495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.734577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.734647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.734733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.734815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.734889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.734984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.735060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.735135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.735211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.735289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.735369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.735455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.735532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.735612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.735685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.735768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.735842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.735916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.735992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.736310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.736384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.736468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.736552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.736627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.736707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.736780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.736855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.736927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.737000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.737079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.737166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.737238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.737308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.737374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.737468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.737545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.737627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.737706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.737784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.737863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.737933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.738010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.738100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.738177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.738251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.738318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.738389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.738490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.738566] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.738641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.738732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.738810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.738886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.738963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.739043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.739124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.739204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.739277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.739345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.739419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.739532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.739609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.739683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.739788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.739867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.739942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.740030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.256 [2024-07-24 18:58:59.740105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.740190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.740266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.740342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.740421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.740511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.740587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 
[2024-07-24 18:58:59.740674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.740751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.740828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.740905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.740981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.741064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.741139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.741217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.742367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.742479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.742547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.742616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.742690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.742791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.742864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.742944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.743020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.743093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.743166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.743237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.743312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.743391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.743485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.743555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.743627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.743698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.743769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.743841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.743914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.743985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.744057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.744137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.744210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.744284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.744362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.744444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.744511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.744586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.744658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.744734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.744815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.744890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.744968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.745040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.745114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.745186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.745262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.745339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.745415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.745516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.745591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.745670] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.745747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.745827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.745908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.745983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.746058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.746134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.746208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.746294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.746367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.746452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.746542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.746618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.746707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.746788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.746872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.746949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.747025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.747107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.747180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.747257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.747553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.747642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.257 [2024-07-24 18:58:59.747719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.747799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.747873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 
[2024-07-24 18:58:59.747946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.748037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.748122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.748198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.748273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.748351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.748441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.748516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.748594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.748667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.748760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.748838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.749568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.749661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.749733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.749811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.749889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.749954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.750028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.750107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.750193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.750274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.750347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.750419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.750526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.750604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.750680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.750766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.750852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.750922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.750991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.751062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.751136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.751224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.751310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.751393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.751482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.751557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.751630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.751700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.751782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.751854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.751937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.752012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.752085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.752166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.752240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.752323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.752401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.752481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.752571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.258 [2024-07-24 18:58:59.752645] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:06:54.258 [ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* "Read NLB 1 * block size 512 > SGL length 1" repeated for every failed read, 2024-07-24 18:58:59.752721 through 18:58:59.758201; only the timestamps differ]
00:06:54.259 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:06:54.259 [same *ERROR* line repeated, 18:58:59.759370 through 18:58:59.759680]
00:06:54.259 true
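The "Message suppressed 999 times" line shows the logger coalescing the flood: after one completion error is printed, the following identical ones are only counted, and the counter is flushed as a summary with the next printed message. A generic suppress-and-summarize sketch of that pattern follows; it is illustrative only, not SPDK's actual rate limiter.

    #include <stdio.h>

    #define SUPPRESS_WINDOW 1000   /* print 1 message per 1000 occurrences */

    /* log_completion_error() is an illustrative helper: the first error is
     * printed, the next 999 are only counted, and each 1000th occurrence
     * flushes a "Message suppressed 999 times" summary like the log's. */
    static void
    log_completion_error(unsigned int sct, unsigned int sc)
    {
        static unsigned long count;

        if (count % SUPPRESS_WINDOW == 0) {
            if (count > 0) {
                printf("Message suppressed %d times: ", SUPPRESS_WINDOW - 1);
            }
            printf("Read completed with error (sct=%u, sc=%u)\n", sct, sc);
        }
        count++;
    }

    int
    main(void)
    {
        /* 2001 failing reads: one initial message plus two summaries. */
        for (int i = 0; i < 2001; i++) {
            log_completion_error(0, 15);
        }
        return 0;
    }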
00:06:54.259 [same *ERROR* line repeated, 18:58:59.759752 through 18:58:59.779950; only the timestamps differ]
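The flood itself is the stress test's intended failure mode: each read arrives with NLB 1 (one 512-byte block) but an SGL describing only 1 byte of payload, so the target's bdev layer rejects the command in nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:309) with sct=0, sc=15, which NVMe defines as Data SGL Length Invalid (0x0F). A minimal standalone sketch of that style of length check follows; the constant names and the validate_read_len() helper are illustrative stand-ins, not SPDK's exact internals. Run with the log's values, it prints the same error text and the same (sct=0, sc=15) completion status the suppressed messages report.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SCT_GENERIC                 0x0
    #define SC_SUCCESS                  0x0
    #define SC_DATA_SGL_LENGTH_INVALID  0x0f   /* sc=15 in the log */

    /* validate_read_len() is a hypothetical stand-in for the kind of check
     * behind the log line: the command's 0-based NLB becomes a block count,
     * and the read is rejected when it needs more bytes than the request's
     * SGL actually describes. */
    static int
    validate_read_len(uint64_t nlb_0based, uint32_t block_size,
                      uint32_t sgl_length, uint8_t *sct, uint8_t *sc)
    {
        uint64_t num_blocks = nlb_0based + 1;

        if (num_blocks * block_size > sgl_length) {
            fprintf(stderr, "*ERROR*: Read NLB %" PRIu64
                    " * block size %" PRIu32 " > SGL length %" PRIu32 "\n",
                    num_blocks, block_size, sgl_length);
            *sct = SCT_GENERIC;
            *sc = SC_DATA_SGL_LENGTH_INVALID;
            return -1;
        }
        *sct = SCT_GENERIC;
        *sc = SC_SUCCESS;
        return 0;
    }

    int
    main(void)
    {
        uint8_t sct, sc;

        /* The log's exact numbers: a 1-block (512 B) read against a 1-byte SGL. */
        validate_read_len(0, 512, 1, &sct, &sc);
        printf("Read completed with error (sct=%u, sc=%u)\n", sct, sc);
        return 0;
    }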
00:06:54.262 [same *ERROR* line repeated, 18:58:59.780023]
00:06:54.262 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119
00:06:54.262 [same *ERROR* line repeated, 18:58:59.780099 through 18:58:59.780244]
00:06:54.262 18:58:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:54.262 [same *ERROR* line repeated, 18:58:59.780320 through 18:58:59.781972]
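The xtrace lines above are the hotplug loop at work: ns_hotplug_stress.sh first probes the target with kill -0 1542119 (signal 0 delivers nothing; it only checks that the PID still exists and is signalable), then calls rpc.py nvmf_subsystem_remove_ns to hot-remove namespace 1 from nqn.2016-06.io.spdk:cnode1 while reads are still in flight, which is what keeps the error flood going. The same liveness probe in C, as a self-contained sketch (the process_alive() helper is illustrative; the PID is the one from the xtrace):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* Mirrors the script's `kill -0 <pid>`: kill() with signal 0 sends
     * nothing and only runs the kernel's existence/permission checks. */
    static int
    process_alive(pid_t pid)
    {
        if (kill(pid, 0) == 0) {
            return 1;               /* exists and is signalable */
        }
        return errno == EPERM;      /* exists, but not ours to signal */
    }

    int
    main(void)
    {
        pid_t target = 1542119;     /* PID from the xtrace above */

        printf("pid %ld is %s\n", (long)target,
               process_alive(target) ? "alive" : "gone");
        return 0;
    }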
00:06:54.262 [same *ERROR* line repeated, 18:58:59.782053 through 18:58:59.800550; only the timestamps differ]
00:06:54.265 [2024-07-24 18:58:59.800620] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.800717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.800801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.800874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.800945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.801026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.801100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.801197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.801273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.801351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.801438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.801528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.801605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.801692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.801777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.802683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.802770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.802848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.802924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.803015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.803092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.803167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.803252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.803327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.803404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.803508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 
[2024-07-24 18:58:59.803581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.803661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.803751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.803834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.803910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.803981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.804048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.804119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.804199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.804274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.804345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.804416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.804520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.804592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.804663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.804751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.804823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.804895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.804967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.805033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.805110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.805179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.805257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.805333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.805405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.805491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.805565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.805637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.805708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.805785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.805872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.805946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.806011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.806083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.806160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.806232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.806307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.806380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.806468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.806543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.806616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.806692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.806764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.806836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.806907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.806981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.807056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.807133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.807207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.265 [2024-07-24 18:58:59.807284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.807361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.807446] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.807540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.807820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.807903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.807981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.808066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.808142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.808223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.808301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.808378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.808470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.808548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.808631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.808704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.808775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.808856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.808928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.809013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.809093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.809177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.809254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.809332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.809413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.809505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.809586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.809661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 
[2024-07-24 18:58:59.809735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.809816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.809892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.809987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.810064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.810145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.810222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.810297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.810384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.810469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.810536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.810619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.810689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.810758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.810851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.810929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.811002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.811082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.811157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.811231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.811302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.811385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.812080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.812162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.812238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.812304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.812378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.812462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.812537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.812620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.812697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.812774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.812855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.812933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.813005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.813074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.813150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.813222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.813304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.813381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.813462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.813539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.813613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.813702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.813777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.813854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.813929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.814002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.814086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.814166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.814244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.814319] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.814391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.814499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.814572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.814654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.814743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.814827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.266 [2024-07-24 18:58:59.814901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.814973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.815056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.815133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.815220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.815294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.815377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.815462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.815536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.815624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.815700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.815772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.815860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.815934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.816012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.816087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.816159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.816245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.816321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 
[2024-07-24 18:58:59.816396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.816480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.816557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.816642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.816718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.816792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.816873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.816950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.817025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.817281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.817357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.817438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.817511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.817583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.817663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.817738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.817810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.817883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.817963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.818039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.818114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.818183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.818255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.818326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.818398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.818495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.819452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.819532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.819604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.819670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.819739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.819813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.819886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.819959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.820050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.820127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.820206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.820282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.820357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.820455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.820531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.820609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.820683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.820757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.820840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.820916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.820995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.821069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.821140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.821228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.821305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.821388] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.821475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.821549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.821634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.821708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.821790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.821863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.821937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.822016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.822091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.822158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.822230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.822304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.822382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.822479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.822552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.822629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.267 [2024-07-24 18:58:59.822698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.822795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.822872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.822949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.823027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.823102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.823178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.823251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.823327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 
[2024-07-24 18:58:59.823404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.823487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.823558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.823637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.823705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.823787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.823870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.823944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.824024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.824100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.824175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.824250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.825103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.825187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.825265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.825342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.825424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.825510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.825586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.825665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.825736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.825819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.825898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.825989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.826064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.826139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.826214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.826291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.826372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.826473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.826559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.826631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.826719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.826799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.826881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.826953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.827024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.827106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.827183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.827273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.827349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.827421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.827512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.827588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.827669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.827750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.827822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.827903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.827978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.828051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.828127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.828202] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.828285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.828360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.828443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.828520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.828591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.828670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.828739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.828811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.828881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.828955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.829028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.829108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.268 [2024-07-24 18:58:59.829173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.269 [2024-07-24 18:58:59.829247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.269 [2024-07-24 18:58:59.829318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.269 [2024-07-24 18:58:59.829392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.269 [2024-07-24 18:58:59.829474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.269 [2024-07-24 18:58:59.829555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.269 [2024-07-24 18:58:59.829630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.269 [2024-07-24 18:58:59.829705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.269 [2024-07-24 18:58:59.829786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.269 [2024-07-24 18:58:59.829860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.269 [2024-07-24 18:58:59.829937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.269 [2024-07-24 18:58:59.830014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.269 [2024-07-24 18:58:59.830293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.269 
[2024-07-24 18:58:59.830380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.271 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:06:54.275 [2024-07-24 18:58:59.877170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.877240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.877312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.877385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.878600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.878685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.878763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.878850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.878924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.879000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.879077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.879152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.879238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.879325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.879404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.879488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.879572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.879651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.879725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.879808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.879886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.879968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.880041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.880116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.880200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.880277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.880355] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.880442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.880530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.880605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.880679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.880776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.880850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.880933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.881011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.881086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.881161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.881234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.881321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.881397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.881501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.881577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.275 [2024-07-24 18:58:59.881648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.881744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.881822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.881907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.881990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.882069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.882144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.882217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.882298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.882373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 
[2024-07-24 18:58:59.882461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.882542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.882618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.882688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.882754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.882829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.882900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.882974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.883044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.883120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.883203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.883281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.883352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.883423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.883516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.883590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.883899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.883977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.884054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.884132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.884208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.884286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.884358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.884441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.884526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.884594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.884663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.884742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.884818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.884900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.884971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.885049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.885124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.885875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.885964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.886039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.886114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.886195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.886270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.886352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.886444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.886546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.886621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.886692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.886790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.886863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.886951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.887027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.887103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.887179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.887255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.887342] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.887420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.887513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.887589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.887661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.887747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.887822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.887898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.887972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.888058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.888133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.888211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.888295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.888371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.888454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.888521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.888599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.888673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.888750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.888822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.888891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.888963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.889036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.889109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.889180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 [2024-07-24 18:58:59.889260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.276 
[2024-07-24 18:58:59.889343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.889410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.889491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.889561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.889638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.889715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.889786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.889859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.889944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.890020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.890094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.890170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.890241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.890314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.890388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.890487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.890567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.890637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.890728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.890809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.891095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.891179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.891253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.891326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.891415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.891501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.891590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.891668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.891749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.891825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.891900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.891979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.892054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.892127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.892204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.892278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.892363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.892452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.892530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.892607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.892681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.892761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.892834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.892919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.892998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.893070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.893149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.893227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.893311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.893384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.893477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.893557] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.893639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.893715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.893793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.893870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.893944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.894027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.894102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.894176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.894253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.894333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.894410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.894493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.894574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.894650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.895623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.895699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.895797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.895869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.895946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.896012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.896085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.896157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.896240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.896316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.896390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 
[2024-07-24 18:58:59.896472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.896544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.896620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.896692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.896772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.896840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.896911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.896987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.897060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.277 [2024-07-24 18:58:59.897132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.897207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.897283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.897358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.897444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.897514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.897584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.897658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.897741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.897818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.897890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.897979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.898058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.898135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.898217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.898291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.898373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.898458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.898536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.898611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.898684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.898763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.898836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.898907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.898993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.899067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.899143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.899219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.899314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.899394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.899490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.899569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.899639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.899718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.899793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.899866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.899937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.900020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.900096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.900171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.900244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.900317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.900389] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.900477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.900727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.900803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.900878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.900951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.901025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.901097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.901171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.901246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.901321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.901398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.901486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.901564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.901641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.901713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.901785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.901859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.901935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.902029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.902103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.902182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.902257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.902335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.902413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.902498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 
[2024-07-24 18:58:59.902580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.902654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.902726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.902797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.902863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.902934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.903007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.903081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.903156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.903236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.904254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.904336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.904414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.904502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.904591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.904668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.904744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.904819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.904898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.278 [2024-07-24 18:58:59.904975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.905048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.905129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.905210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.905287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.905363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.905459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.905537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.905610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.905700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.905776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.905856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.905934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.906005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.906091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.906170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.906243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.906329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.906403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.906486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.906562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.906638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.906718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.906792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.906871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.906950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.907022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.907105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.907185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.907265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.907343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.907426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.279 [2024-07-24 18:58:59.907526] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.557 Message suppressed 999 times: [2024-07-24 18:58:59.936905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.557 Read completed with error (sct=0, sc=15) 00:06:54.560 [2024-07-24 18:58:59.955187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.955270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.955345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.955420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.955507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.955591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.955667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.955745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.955815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.955882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.955950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.956026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.956096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.956170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.956245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.956319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.956393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.956481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.956555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.956626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.956701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.956777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.956843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.956913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.956983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.957059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.957140] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.957213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.957289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.957365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.957449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.560 [2024-07-24 18:58:59.957523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.957603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.957680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.957748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.957812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.957884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.957950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.958024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.958099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.959117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.959204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.959281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.959353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.959447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.959524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.959614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.959691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.959764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.959842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.959915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.959997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 
[2024-07-24 18:58:59.960072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.960144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.960224] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.960299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.960389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.960475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.960568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.960644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.960721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.960796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.960870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.960955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.961031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.961104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.961188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.961263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.961342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.961420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.961509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.961595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.961672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.961753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.961828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.961902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.961984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.962058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.962131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.962197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.962272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.962346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.962417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.962509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.962583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.962659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.962736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.962809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.962882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.962960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.963038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.963120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.963187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.963259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.963329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.963402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.963487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.963561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.963636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.963708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.963783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.963863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.963938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.964009] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.964306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.964386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.964470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.964551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.964627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.964702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.964777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.964848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.964921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.964990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.965070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.965147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.965223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.561 [2024-07-24 18:58:59.965307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.965383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.965472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.965553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.965626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.965711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.965786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.965873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.965947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.966032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.966108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.966182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 
[2024-07-24 18:58:59.966261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.966340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.966426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.966512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.966591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.966669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.966743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.966818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.966895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.967765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.967850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.967928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.968000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.968080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.968156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.968240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.968317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.968386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.968464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.968543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.968615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.968692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.968766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.968844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.968921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.968993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.969066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.969152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.969234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.969308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.969379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.969455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.969530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.969605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.969679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.969756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.969828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.969900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.969978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.970058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.970130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.970203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.970275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.970341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.970417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.970501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.970581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.970665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.970740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.970812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.970887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.970963] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.971036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.971112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.971183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.971258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.971333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.971412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.971496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.971572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.971654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.971733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.971805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.971890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.971962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.972051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.972128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.972199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.972282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.972357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.972440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.972524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.972600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.972878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.562 [2024-07-24 18:58:59.972961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.973037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.973118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 
[2024-07-24 18:58:59.973192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.973266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.973347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.973421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.973517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.973595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.973670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.973745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.973820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.973902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.973976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.974058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.974135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.974210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.974289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.974364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.974463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.974537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.974617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.974691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.974761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.974828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.974898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.974971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.975046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.975119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.975193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.975274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.975349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.975437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.975512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.975596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.975672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.975743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.975809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.975878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.975956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.976032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.976105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.976180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.976252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.976331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.977058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.977141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.977228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.977300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.977367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.977443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.977536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.977619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.977690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.977769] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.977845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.977955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.978028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.978102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.978173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.978244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.978342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.978420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.978508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.978584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.978656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.978732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.978804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.978891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.978983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.979054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.979133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.979230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.979315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.979391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.979474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.979556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.979636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.979732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.979813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 
[2024-07-24 18:58:59.979898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.979976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.980049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.980143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.980222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.563 [2024-07-24 18:58:59.980315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.980400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.980490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.980574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.980645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.980728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.980810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.980898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.980972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.981051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.981128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.981203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.981287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.981364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.981447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.981514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.981588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.981663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.981737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.981808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.981880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.981953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.982025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.982102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.982375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.982456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.982530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.982599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.982670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.982744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.982821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.982897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.982981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.983052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.983125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.983199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.983276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.983353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.983422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.983504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.983581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.983664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.984711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.984792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.984870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.984945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.985020] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.985097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.985170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.985251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.985331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.985415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.985503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.985583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.985659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.985742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.985818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.985895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.985980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.986053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.986138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.986213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.986292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.986370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.986451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.986536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.986608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.986691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.986766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.986844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.986922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 [2024-07-24 18:58:59.986995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:06:54.564 
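Each collapsed entry above is the SPDK target's read-command length validation (nvmf_bdev_ctrlr_read_cmd, ctrlr_bdev.c:309 per this log) rejecting a read whose data transfer is larger than the buffer its SGL describes. Restating the message's own numbers as the failing check:

    NLB × block size = 1 × 512 = 512 bytes > SGL length = 1 byte

Every such read therefore completes back to the initiator with an error status, which the initiator rate-limits into the "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" notices that follow.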
00:06:55.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:55.498 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:55.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:55.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:55.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:55.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:55.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:55.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:55.756 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
18:59:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:06:56.321 true
00:06:56.321 18:59:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.886 18:59:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.886 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.887 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.145 18:59:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:57.145 18:59:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:57.710 true 00:06:57.710 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:06:57.710 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.275 18:59:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.533 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:58.533 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:58.533 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.097 true 00:06:59.098 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:06:59.098 18:59:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.664 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.929 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:59.929 18:59:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:00.495 true 00:07:00.495 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:07:00.495 18:59:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.867 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.867 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.139 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:02.139 18:59:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:02.705 true 00:07:02.705 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:07:02.705 18:59:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.079 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.337 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:04.337 18:59:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:04.596 true 00:07:04.596 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:07:04.596 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.528 18:59:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:07:05.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.785 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.785 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:05.785 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:06.043 true 00:07:06.043 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:07:06.043 18:59:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.976 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.976 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:06.976 18:59:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:07.541 true 00:07:07.541 18:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:07:07.541 18:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.798 18:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.363 18:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:08.363 18:59:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:08.621 true 00:07:08.621 18:59:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:07:08.621 18:59:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:08.880 18:59:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.443 18:59:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:09.443 18:59:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:09.443 true 00:07:09.700 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:07:09.700 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.958 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.524 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:10.524 18:59:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:10.781 true 00:07:11.038 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:07:11.038 18:59:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.411 18:59:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.669 Initializing NVMe Controllers 00:07:12.669 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:12.669 Controller IO queue size 128, less than required. 00:07:12.669 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:12.669 Controller IO queue size 128, less than required. 00:07:12.669 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:12.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:12.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:12.669 Initialization complete. 
Launching workers.
00:07:12.669 ========================================================
00:07:12.669 Latency(us)
00:07:12.669 Device Information : IOPS MiB/s Average min max
00:07:12.669 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3313.57 1.62 27008.36 3598.45 1128748.92
00:07:12.669 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10474.23 5.11 12220.70 2234.51 745137.31
00:07:12.669 ========================================================
00:07:12.669 Total : 13787.80 6.73 15774.56 2234.51 1128748.92
00:07:12.669
00:07:12.669 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:12.669 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:13.234 true 00:07:13.234 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1542119 00:07:13.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1542119) - No such process 00:07:13.234 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1542119 00:07:13.234 18:59:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.799 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.056 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:14.056 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:14.056 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:14.056 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:14.056 18:59:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:14.622 null0 00:07:14.622 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:14.622 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:14.622 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:14.880 null1 00:07:14.880 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:14.880 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:14.880 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:15.138 null2 00:07:15.138 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # ((
++i )) 00:07:15.138 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:15.138 18:59:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:15.703 null3 00:07:15.703 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:15.703 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:15.703 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:16.287 null4 00:07:16.287 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:16.287 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:16.287 18:59:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:16.560 null5 00:07:16.561 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:16.561 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:16.561 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:16.818 null6 00:07:17.075 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.075 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.075 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:17.075 null7 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
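
Two observations on the Latency(us) summary above. First, the Total row is the IOPS-weighted average of the per-namespace rows: (3313.57 * 27008.36 + 10474.23 * 12220.70) / 13787.80 ≈ 15774.6 us, matching the reported 15774.56 us. Second, NSID 1 (the hot-swapped Delay0 namespace, if NSID 2 is the resized NULL1 bdev) averages more than twice the latency of NSID 2 and owns the worst-case read of 1128748.92 us, about 1.13 s, which is the expected signature of I/O stalled across a detach/attach cycle.
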
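The null0-null7 creation trace above (script lines @58-@60) is a plain counted loop. A minimal reconstruction, assuming the rpc_py shorthand and reading the positional arguments in bdev_null_create's usual order (size in MB, then block size in bytes):

    # Reconstruction of script lines 58-60: one null bdev per future worker.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8                   # line 58
    pids=()                      # line 58: filled by the worker launches below
    for ((i = 0; i < nthreads; i++)); do                 # line 59
        # 100 MB null bdev with a 4096-byte block size
        "$rpc_py" bdev_null_create "null$i" 100 4096     # line 60
    done

Each call answers with the new bdev's name, which is where the bare null0, null1, ..., null7 lines in the log come from.
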
00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
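
From here to the end of the excerpt, the @14-@18 markers from eight concurrent workers interleave: the @62-@64 loop backgrounds one add_remove worker per null bdev (add_remove 1 null0 through add_remove 8 null7) and the @66 wait reaps them. A sketch of that shape, reconstructed from the trace markers rather than copied from the script:

    # Shape of the concurrent namespace churn, inferred from the @14-@18 and
    # @62-@66 trace markers; function body and naming are reconstructions.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2                            # line 14
        for ((i = 0; i < 10; i++)); do                   # line 16: ten add/remove rounds
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # line 17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # line 18
        done
    }
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do                 # lines 62-64
        add_remove $((i + 1)) "null$i" &                 # add_remove 1 null0 ... add_remove 8 null7
        pids+=($!)
    done
    wait "${pids[@]}"                                    # line 66: wait 1546206 1546207 ...

Because the eight workers run unsynchronized, the nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns lines for different NSIDs interleave arbitrarily for the remainder of the log.
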
00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1546206 1546207 1546209 1546211 1546213 1546215 1546217 1546219 00:07:17.334 18:59:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:17.593 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:17.593 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.593 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:17.593 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:17.593 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:17.593 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:17.593 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:17.593 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.856 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.121 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.121 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.121 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.121 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:18.121 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:18.121 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.380 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.380 18:59:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:18.380 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.380 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.380 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:18.380 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.380 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.380 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:18.380 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.380 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.380 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:18.380 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.380 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.380 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:18.380 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.380 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.380 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:18.638 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.638 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.638 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:18.638 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.638 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.638 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:18.638 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:18.638 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:18.638 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:18.638 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:18.897 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.897 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:18.897 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:18.897 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:18.897 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:18.897 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.897 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.154 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.412 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.412 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.412 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.412 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.412 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.412 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.412 18:59:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:19.412 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:19.412 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:19.412 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:19.670 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:19.670 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:19.670 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.670 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:19.670 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.670 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.670 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:19.670 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.670 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.670 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:19.670 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.671 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.671 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:19.929 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.929 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.929 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:19.929 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.929 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.929 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:19.929 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.929 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.929 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:19.929 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.929 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:19.929 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:19.929 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:19.929 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:19.929 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.187 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.187 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.187 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.187 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.187 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.187 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.187 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.187 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.187 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.187 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.187 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.187 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.187 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.187 18:59:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.445 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.702 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:20.702 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.702 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:20.702 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:20.960 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.960 
18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.960 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.960 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:20.960 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.960 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.961 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.961 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.961 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.961 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.961 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:20.961 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.961 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.961 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.961 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.961 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:20.961 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.961 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.219 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.219 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.219 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.219 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.219 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.219 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.219 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.219 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.219 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.219 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.219 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.219 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.219 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.219 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.476 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.476 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.476 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.476 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.476 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.476 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.476 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.476 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.477 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.477 18:59:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.477 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.477 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.477 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.477 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.477 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.734 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.735 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.735 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.735 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.735 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.735 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.735 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.735 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.735 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.735 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.735 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.735 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.735 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.735 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.735 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.735 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.993 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.993 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.993 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.993 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.993 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.993 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.993 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.993 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.993 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.993 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.993 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.993 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.252 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.252 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.252 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.252 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.252 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.252 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.252 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.510 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.510 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.510 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.510 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.510 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.510 18:59:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.510 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.510 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.510 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.510 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.510 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.510 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.510 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.768 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.768 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.768 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.768 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.768 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.768 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.768 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:22.768 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.768 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.768 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.769 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.769 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.769 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.769 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.769 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.769 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.026 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.026 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.026 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.026 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.026 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.027 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.027 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.027 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.027 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.027 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.027 18:59:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.027 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.285 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.285 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.285 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.285 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.285 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.285 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.285 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.285 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.285 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.285 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.285 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.285 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.285 18:59:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.542 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.542 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.542 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.542 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.542 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.542 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.542 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.800 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.800 
18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.800 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.800 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.800 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:23.800 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:23.800 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:23.800 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:23.800 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:23.801 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:23.801 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:23.801 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:23.801 rmmod nvme_tcp 00:07:23.801 rmmod nvme_fabrics 00:07:23.801 rmmod nvme_keyring 00:07:23.801 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:23.801 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:23.801 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:23.801 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1541424 ']' 00:07:23.801 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1541424 00:07:23.801 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1541424 ']' 00:07:23.801 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1541424 00:07:23.801 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:07:23.801 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.801 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1541424 00:07:24.059 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:24.059 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:24.059 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1541424' 00:07:24.059 killing process with pid 1541424 00:07:24.059 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1541424 00:07:24.059 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1541424 00:07:24.317 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:24.317 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:24.317 18:59:29 
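
The add/remove churn traced above is the core of ns_hotplug_stress.sh: a ten-iteration loop (script lines @16-@18) that attaches one of the null bdevs to nqn.2016-06.io.spdk:cnode1 under a namespace ID and detaches another while I/O runs against the subsystem. A minimal bash sketch reconstructed from the xtrace; the random ID selection and the strict add-then-remove pairing inside one iteration are assumptions (the trace shows adds and removes interleaving freely), and $rpc_py stands for the scripts/rpc.py path seen in the log:

i=0
while (( i < 10 )); do
    n=$(( (RANDOM % 8) + 1 ))   # namespace ID 1..8 (assumed distribution)
    # null$((n - 1)) is the matching null bdev: nsid 1 -> null0, ..., nsid 8 -> null7
    $rpc_py nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$(( n - 1 ))"
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
    (( ++i ))
done
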
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:07:24.317 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:24.317 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:24.317 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:24.317 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:24.317 18:59:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:26.850 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:26.850
00:07:26.850 real 0m52.725s
00:07:26.850 user 3m57.390s
00:07:26.850 sys 0m18.389s
00:07:26.850 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:26.850 18:59:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:26.850 ************************************
00:07:26.850 END TEST nvmf_ns_hotplug_stress
00:07:26.850 ************************************
00:07:26.850 18:59:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:26.850 18:59:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:26.850 18:59:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:26.850 18:59:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:26.850 ************************************
00:07:26.850 START TEST nvmf_delete_subsystem
00:07:26.850 ************************************
00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:26.850 * Looking for test storage...
00:07:26.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.850 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:26.851 18:59:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:29.397 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:29.397 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:29.397 Found net devices under 0000:84:00.0: cvl_0_0 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:29.397 Found net devices under 0000:84:00.1: cvl_0_1 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:29.397 18:59:34 
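
The device probe above (common.sh lines @382-@401) resolves each matching PCI function to its kernel net device by globbing sysfs, which is how the two ice ports become cvl_0_0 and cvl_0_1 in the rest of the log. The pattern, condensed from the trace into a small bash fragment (pci_devs and net_devs are the arrays the script populates earlier):

for pci in "${pci_devs[@]}"; do
    # each network PCI function lists its netdev(s) under /sys/bus/pci/devices/<addr>/net/
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # keep only the interface name, e.g. /sys/.../net/cvl_0_0 -> cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")
    net_devs+=("${pci_net_devs[@]}")
done
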
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:07:29.397 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:29.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:29.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms
00:07:29.398
00:07:29.398 --- 10.0.0.2 ping statistics ---
00:07:29.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:29.398 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:29.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:29.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms
00:07:29.398
00:07:29.398 --- 10.0.0.1 ping statistics ---
00:07:29.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:29.398 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:29.398 18:59:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:29.398 18:59:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:07:29.398 18:59:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:29.398 18:59:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:29.398 18:59:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:29.398 18:59:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1549126
00:07:29.398 18:59:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:07:29.398 18:59:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1549126
00:07:29.398 18:59:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1549126 ']'
00:07:29.398 18:59:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:29.398 18:59:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:29.398 18:59:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
18:59:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:29.398 18:59:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:29.398 [2024-07-24 18:59:35.062960] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization...
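
nvmf_tcp_init, traced just above, turns the two back-to-back ice ports into an isolated target/initiator pair: cvl_0_0 moves into a dedicated network namespace for the target while cvl_0_1 stays in the root namespace as the initiator. Condensed to the bare ip/iptables commands exactly as they appear in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                   # sanity check, both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every subsequent nvmf_tgt invocation is then wrapped in ip netns exec cvl_0_0_ns_spdk so the target only ever sees the namespaced port.
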
00:07:29.398 [2024-07-24 18:59:35.063051] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.679 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.679 [2024-07-24 18:59:35.168869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:29.959 [2024-07-24 18:59:35.363807] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.960 [2024-07-24 18:59:35.363874] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.960 [2024-07-24 18:59:35.363896] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.960 [2024-07-24 18:59:35.363913] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.960 [2024-07-24 18:59:35.363928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.960 [2024-07-24 18:59:35.364025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.960 [2024-07-24 18:59:35.364048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.524 [2024-07-24 18:59:36.206232] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.524 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.782 [2024-07-24 18:59:36.223232] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.782 NULL1 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.782 Delay0 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1549343 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:30.782 18:59:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:30.782 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.782 [2024-07-24 18:59:36.307374] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
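(Aside: stripped of xtrace noise, the target-side setup traced above reduces to the short RPC sequence sketched below. The rpc.py wrapper and default /var/tmp/spdk.sock socket are assumptions — the harness's rpc_cmd hides those details — but every command name and argument value is verbatim from the trace; this is a sketch, not the harness code itself.)

    # Minimal sketch of the traced setup, assuming scripts/rpc.py against
    # the default RPC socket; argument values are verbatim from the log.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8192 B in-capsule data
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                         # 1000 MiB null bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The Delay0 bdev layered on NULL1 adds roughly one second of artificial latency to every read and write (the four bdev_delay_create values are in microseconds), which is what keeps the perf job's 128 queued I/Os in flight long enough for the nvmf_delete_subsystem call below to tear the subsystem down underneath them.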
00:07:32.679 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:32.679 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.679 18:59:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:32.938 Read completed with error (sct=0, sc=8)
00:07:32.938 Read completed with error (sct=0, sc=8)
00:07:32.938 starting I/O failed: -6
[... repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' entries trimmed ...]
00:07:32.938 [2024-07-24 18:59:38.612700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1add3e0 is same with the state(5) to be set
[... repeated 'Read/Write completed with error (sct=0, sc=8)' entries trimmed ...]
00:07:32.939 [2024-07-24 18:59:38.613588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f73e8000c00 is same with the state(5) to be set
[... repeated 'Read/Write completed with error (sct=0, sc=8)' entries trimmed ...]
00:07:32.939 [2024-07-24 18:59:38.614250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1add8f0 is same with the state(5) to be set
[... repeated 'Read/Write completed with error (sct=0, sc=8)' entries trimmed ...]
00:07:34.312 [2024-07-24 18:59:39.575275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adeac0 is same with the state(5) to be set
[... repeated 'Read/Write completed with error (sct=0, sc=8)' entries trimmed ...]
00:07:34.312 [2024-07-24 18:59:39.617459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f73e800d660 is same with the state(5) to be set
[... repeated 'Read/Write completed with error (sct=0, sc=8)' entries trimmed ...]
00:07:34.313 [2024-07-24 18:59:39.617750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f73e800d000 is same with the state(5) to be set
[... repeated 'Read/Write completed with error (sct=0, sc=8)' entries trimmed ...]
00:07:34.313 [2024-07-24 18:59:39.618223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1add5c0 is same with the state(5) to be set
[... repeated 'Read/Write completed with error (sct=0, sc=8)' entries trimmed ...]
00:07:34.313 [2024-07-24 18:59:39.618494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1addc20 is same with the state(5) to be set
00:07:34.313 Initializing NVMe Controllers
00:07:34.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:34.313 Controller IO queue size 128, less than required.
00:07:34.313 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:34.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:34.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:34.313 Initialization complete. Launching workers.
00:07:34.313 ========================================================
00:07:34.313                                                                                                  Latency(us)
00:07:34.313 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:34.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     161.49       0.08  919958.73    1576.31 2003138.24
00:07:34.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     165.95       0.08  921101.09     753.19 2005569.03
00:07:34.313 ========================================================
00:07:34.313 Total                                                                    :     327.45       0.16  920537.68     753.19 2005569.03
00:07:34.313
00:07:34.313 [2024-07-24 18:59:39.619504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adeac0 (9): Bad file descriptor
00:07:34.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:34.313 18:59:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:34.313 18:59:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:34.313 18:59:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1549343
00:07:34.313 18:59:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1549343
00:07:34.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1549343) - No such process
00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1549343
00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1549343
00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1549343
00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.572 [2024-07-24 18:59:40.142943] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1549806 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549806 00:07:34.572 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:34.572 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.572 [2024-07-24 18:59:40.211816] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
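(What follows is the harness polling for the second perf process to exit. Reconstructed below as a sketch from the traced script lines — delay=0 at line 56, kill -0 at 57, sleep 0.5 at 58, the (( delay++ > 20 )) guard at 60 — so the exact control flow is an inference, not a copy of delete_subsystem.sh:)

    # Poll until spdk_nvme_perf exits; kill -0 sends no signal, it only
    # checks that the pid still exists. Budget: ~20 polls x 0.5 s = ~10 s.
    delay=0
    while kill -0 $perf_pid 2>/dev/null; do
        if (( delay++ > 20 )); then
            echo "perf pid $perf_pid did not exit" >&2
            exit 1
        fi
        sleep 0.5
    done

Seen above for the first perf run (pid 1549343) and below for the second (pid 1549806), the loop ends with kill reporting 'No such process', which is the expected outcome: perf exits with an error as soon as its subsystem disappears.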
00:07:35.137 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:35.137 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549806 00:07:35.137 18:59:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:35.703 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:35.703 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549806 00:07:35.703 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:36.268 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:36.268 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549806 00:07:36.268 18:59:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:36.526 18:59:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:36.526 18:59:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549806 00:07:36.526 18:59:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.092 18:59:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:37.092 18:59:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549806 00:07:37.092 18:59:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.656 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:37.656 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549806 00:07:37.656 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.913 Initializing NVMe Controllers 00:07:37.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:37.913 Controller IO queue size 128, less than required. 00:07:37.913 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:37.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:37.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:37.913 Initialization complete. Launching workers. 
00:07:37.913 ========================================================
00:07:37.913                                                                                                  Latency(us)
00:07:37.913 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:37.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1005192.46 1000309.23 1042815.99
00:07:37.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1006526.97 1000362.31 1042032.08
00:07:37.913 ========================================================
00:07:37.913 Total                                                                    :     256.00       0.12 1005859.72 1000309.23 1042815.99
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1549806
00:07:38.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1549806) - No such process
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1549806
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:38.169 rmmod nvme_tcp
00:07:38.169 rmmod nvme_fabrics
00:07:38.169 rmmod nvme_keyring
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1549126 ']'
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1549126
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1549126 ']'
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1549126
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1549126
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '['
reactor_0 = sudo ']' 00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1549126' 00:07:38.169 killing process with pid 1549126 00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1549126 00:07:38.169 18:59:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1549126 00:07:38.736 18:59:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:38.736 18:59:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:38.736 18:59:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:38.736 18:59:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:38.736 18:59:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:38.736 18:59:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.736 18:59:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.736 18:59:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.639 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:40.639 00:07:40.639 real 0m14.236s 00:07:40.639 user 0m30.480s 00:07:40.639 sys 0m3.721s 00:07:40.639 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.639 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.639 ************************************ 00:07:40.639 END TEST nvmf_delete_subsystem 00:07:40.639 ************************************ 00:07:40.639 18:59:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:40.639 18:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:40.639 18:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.639 18:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.639 ************************************ 00:07:40.639 START TEST nvmf_host_management 00:07:40.639 ************************************ 00:07:40.639 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:40.899 * Looking for test storage... 
00:07:40.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.899 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.899 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:40.899 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.899 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.899 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.899 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.899 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.899 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.899 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.899 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.899 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.899 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.899 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:40.899 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:40.899 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:40.900 18:59:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:44.187 
18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:44.187 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.187 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:44.188 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:44.188 Found net devices under 0000:84:00.0: cvl_0_0 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:44.188 Found net devices under 0000:84:00.1: cvl_0_1 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:44.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:44.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:07:44.188 00:07:44.188 --- 10.0.0.2 ping statistics --- 00:07:44.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.188 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:44.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:07:44.188 00:07:44.188 --- 10.0.0.1 ping statistics --- 00:07:44.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.188 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1552292 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1552292 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1552292 ']' 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.188 18:59:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:44.188 [2024-07-24 18:59:49.480414] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:07:44.188 [2024-07-24 18:59:49.480518] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.189 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.189 [2024-07-24 18:59:49.567063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.189 [2024-07-24 18:59:49.707960] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.189 [2024-07-24 18:59:49.708032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.189 [2024-07-24 18:59:49.708052] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.189 [2024-07-24 18:59:49.708068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.189 [2024-07-24 18:59:49.708082] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.189 [2024-07-24 18:59:49.708209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.189 [2024-07-24 18:59:49.708307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.189 [2024-07-24 18:59:49.708416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:44.189 [2024-07-24 18:59:49.708420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.123 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.124 [2024-07-24 18:59:50.575166] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.124 Malloc0 00:07:45.124 [2024-07-24 18:59:50.639776] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1552474 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1552474 /var/tmp/bdevperf.sock 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1552474 ']' 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:45.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
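Note: the '--json /dev/fd/63' in the bdevperf command above is bash process substitution; gen_nvmf_target_json writes the bdev_nvme_attach_controller config (printed verbatim just below) to that descriptor for bdevperf to load at startup. A roughly equivalent manual attach - a sketch only, assuming SPDK's stock scripts/rpc.py option names and a bdevperf started in RPC-wait mode (-z) - would be:

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0   # params copied from the JSON below; yields bdev Nvme0n1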
00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:45.124 { 00:07:45.124 "params": { 00:07:45.124 "name": "Nvme$subsystem", 00:07:45.124 "trtype": "$TEST_TRANSPORT", 00:07:45.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:45.124 "adrfam": "ipv4", 00:07:45.124 "trsvcid": "$NVMF_PORT", 00:07:45.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:45.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:45.124 "hdgst": ${hdgst:-false}, 00:07:45.124 "ddgst": ${ddgst:-false} 00:07:45.124 }, 00:07:45.124 "method": "bdev_nvme_attach_controller" 00:07:45.124 } 00:07:45.124 EOF 00:07:45.124 )") 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:45.124 18:59:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:45.124 "params": { 00:07:45.124 "name": "Nvme0", 00:07:45.124 "trtype": "tcp", 00:07:45.124 "traddr": "10.0.0.2", 00:07:45.124 "adrfam": "ipv4", 00:07:45.124 "trsvcid": "4420", 00:07:45.124 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:45.124 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:45.124 "hdgst": false, 00:07:45.124 "ddgst": false 00:07:45.124 }, 00:07:45.124 "method": "bdev_nvme_attach_controller" 00:07:45.124 }' 00:07:45.124 [2024-07-24 18:59:50.726631] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:07:45.124 [2024-07-24 18:59:50.726733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1552474 ] 00:07:45.124 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.124 [2024-07-24 18:59:50.808058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.382 [2024-07-24 18:59:50.947923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.641 Running I/O for 10 seconds... 
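Note: the 'Running I/O for 10 seconds...' banner is bdevperf's own; the harness next polls the bdevperf RPC socket until at least 100 reads have completed, so there is real I/O in flight before it injects the failure. A condensed sketch of that waitforio loop (same RPC method, jq filter, threshold, and poll interval as the trace that follows; the script's rpc_cmd is a wrapper around scripts/rpc.py):

    while :; do
        ops=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')   # 67 on the first pass below, 451 on the second
        [ "$ops" -ge 100 ] && break               # threshold used by host_management.sh
        sleep 0.25
    done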
00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:45.641 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:45.899 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:45.899 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:45.899 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:45.899 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:45.899 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.899 18:59:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:45.899 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.161 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:07:46.161 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:07:46.161 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:46.161 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:46.161 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:46.161 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:46.161 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.161 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.161 [2024-07-24 18:59:51.627274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627601] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627633] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 
18:59:51.627666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627683] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627806] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 [2024-07-24 18:59:51.627822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18883c0 is same with the state(5) to be set 00:07:46.161 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.161 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:46.161 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.161 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.161 [2024-07-24 18:59:51.635899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:46.161 [2024-07-24 18:59:51.635953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.161 [2024-07-24 18:59:51.635977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:46.161 [2024-07-24 18:59:51.635996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.161 [2024-07-24 18:59:51.636014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:46.161 [2024-07-24 18:59:51.636032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.161 [2024-07-24 18:59:51.636051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:46.161 [2024-07-24 18:59:51.636069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.161 [2024-07-24 18:59:51.636087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dea540 is same with the state(5) to be set 
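Note: the notice storm here is the point of the test. nvmf_subsystem_remove_host was issued (rpc trace above) while bdevperf still had its 64-deep verify workload queued, so the target drops the connection and the host driver prints every outstanding command - the admin-queue ASYNC EVENT REQUESTs above and the 64 queued WRITEs below - completing each with ABORTED - SQ DELETION, then resets the controller once nvmf_subsystem_add_host re-admits it. The RPC pair driving this, sketched as plain scripts/rpc.py calls against the target's default /var/tmp/spdk.sock (the harness issues the same methods through its rpc_cmd wrapper):

    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_remove_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # kicks the connected initiator mid-I/O
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # allows the subsequent reset/reconnect to succeed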
00:07:46.161 [2024-07-24 18:59:51.636718 - 18:59:51.639272] [... 120 repetitive notices condensed: each of the 60 in-flight WRITEs (sqid:1 cid:0-59 nsid:1 lba:65536-73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) is printed by nvme_qpair.c: 243:nvme_io_qpair_print_command and completed by nvme_qpair.c: 474:spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the final four pairs follow ...] 00:07:46.163 [2024-07-24 
18:59:51.639292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.163 [2024-07-24 18:59:51.639311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.163 [2024-07-24 18:59:51.639331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.163 [2024-07-24 18:59:51.639350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.163 [2024-07-24 18:59:51.639370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.163 [2024-07-24 18:59:51.639389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.163 [2024-07-24 18:59:51.639409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:46.163 [2024-07-24 18:59:51.639436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:46.163 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.163 [2024-07-24 18:59:51.639570] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21fad70 was disconnected and freed. reset controller. 00:07:46.163 18:59:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:46.163 [2024-07-24 18:59:51.641116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:46.163 task offset: 65536 on job bdev=Nvme0n1 fails 00:07:46.163 00:07:46.163 Latency(us) 00:07:46.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.163 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:46.163 Job: Nvme0n1 ended in about 0.44 seconds with error 00:07:46.163 Verification LBA range: start 0x0 length 0x400 00:07:46.163 Nvme0n1 : 0.44 1152.48 72.03 144.06 0.00 47702.75 4271.98 47380.10 00:07:46.163 =================================================================================================================== 00:07:46.163 Total : 1152.48 72.03 144.06 0.00 47702.75 4271.98 47380.10 00:07:46.163 [2024-07-24 18:59:51.643682] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.163 [2024-07-24 18:59:51.643732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dea540 (9): Bad file descriptor 00:07:46.163 [2024-07-24 18:59:51.705681] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
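Note: both bdevperf runs - the deliberately failed 0.44 s run above and the clean 1 s verify pass below - use the TCP path assembled earlier in this trace: one port of the dual-port E810 is moved into its own network namespace, so initiator-to-target traffic actually crosses the NIC rather than the kernel loopback. Condensed from the setup commands logged above (interface names cvl_0_0/cvl_0_1 as discovered above):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP port 4420 traffic in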
00:07:47.136 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1552474 00:07:47.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1552474) - No such process 00:07:47.136 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:47.136 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:47.136 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:47.136 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:47.136 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:47.136 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:47.136 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:47.136 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:47.136 { 00:07:47.136 "params": { 00:07:47.136 "name": "Nvme$subsystem", 00:07:47.136 "trtype": "$TEST_TRANSPORT", 00:07:47.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.136 "adrfam": "ipv4", 00:07:47.137 "trsvcid": "$NVMF_PORT", 00:07:47.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.137 "hdgst": ${hdgst:-false}, 00:07:47.137 "ddgst": ${ddgst:-false} 00:07:47.137 }, 00:07:47.137 "method": "bdev_nvme_attach_controller" 00:07:47.137 } 00:07:47.137 EOF 00:07:47.137 )") 00:07:47.137 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:47.137 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:47.137 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:47.137 18:59:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:47.137 "params": { 00:07:47.137 "name": "Nvme0", 00:07:47.137 "trtype": "tcp", 00:07:47.137 "traddr": "10.0.0.2", 00:07:47.137 "adrfam": "ipv4", 00:07:47.137 "trsvcid": "4420", 00:07:47.137 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:47.137 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:47.137 "hdgst": false, 00:07:47.137 "ddgst": false 00:07:47.137 }, 00:07:47.137 "method": "bdev_nvme_attach_controller" 00:07:47.137 }' 00:07:47.137 [2024-07-24 18:59:52.722105] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:07:47.137 [2024-07-24 18:59:52.722278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1552753 ] 00:07:47.137 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.394 [2024-07-24 18:59:52.839614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.394 [2024-07-24 18:59:52.980689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.960 Running I/O for 1 seconds... 00:07:48.894 00:07:48.894 Latency(us) 00:07:48.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.894 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:48.894 Verification LBA range: start 0x0 length 0x400 00:07:48.894 Nvme0n1 : 1.04 1233.95 77.12 0.00 0.00 50878.52 8301.23 44855.75 00:07:48.894 =================================================================================================================== 00:07:48.894 Total : 1233.95 77.12 0.00 0.00 50878.52 8301.23 44855.75 00:07:49.152 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:49.153 rmmod nvme_tcp 00:07:49.153 rmmod nvme_fabrics 00:07:49.153 rmmod nvme_keyring 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1552292 ']' 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1552292 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1552292 ']' 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1552292 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@955 -- # uname 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1552292 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1552292' 00:07:49.153 killing process with pid 1552292 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1552292 00:07:49.153 18:59:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1552292 00:07:49.720 [2024-07-24 18:59:55.147388] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:49.720 18:59:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:49.720 18:59:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:49.720 18:59:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:49.720 18:59:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:49.720 18:59:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:49.720 18:59:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.721 18:59:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.721 18:59:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.622 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:51.622 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:51.622 00:07:51.622 real 0m10.954s 00:07:51.622 user 0m25.661s 00:07:51.622 sys 0m3.683s 00:07:51.622 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.622 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.622 ************************************ 00:07:51.622 END TEST nvmf_host_management 00:07:51.622 ************************************ 00:07:51.622 18:59:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:51.622 18:59:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:51.622 18:59:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.622 18:59:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.881 ************************************ 00:07:51.881 START TEST nvmf_lvol 00:07:51.882 ************************************ 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:51.882 * Looking for test storage... 00:07:51.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:51.882 18:59:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:54.413 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:54.413 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:07:54.413 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:54.413 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:54.413 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:54.413 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:54.413 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:54.413 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:07:54.413 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:54.413 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:07:54.413 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:54.414 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:54.414 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:54.414 Found net devices under 0000:84:00.0: cvl_0_0 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:54.414 Found net devices under 0000:84:00.1: cvl_0_1 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.414 18:59:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:54.414 18:59:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:54.414 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:54.414 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:54.414 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:54.414 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.414 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.414 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:54.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:07:54.673 00:07:54.673 --- 10.0.0.2 ping statistics --- 00:07:54.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.673 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:54.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:07:54.673 00:07:54.673 --- 10.0.0.1 ping statistics --- 00:07:54.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.673 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1554994 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1554994 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1554994 ']' 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.673 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:54.673 [2024-07-24 19:00:00.264296] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
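The test bed above is built from the two E810 ports on a single host: cvl_0_0 is moved into a private network namespace to play the target, while cvl_0_1 stays in the root namespace as the initiator. Condensed from the trace (interface names and addresses exactly as logged in this run), the setup amounts to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on the default port
  ping -c 1 10.0.0.2                                                 # sanity-check both directions

nvmf_tgt is then launched under 'ip netns exec cvl_0_0_ns_spdk' (the NVMF_TARGET_NS_CMD prefix visible in the nvmfappstart line above), so it listens on 10.0.0.2 while the initiator-side tools connect from the root namespace.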
00:07:54.673 [2024-07-24 19:00:00.264499] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.673 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.931 [2024-07-24 19:00:00.416695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.931 [2024-07-24 19:00:00.618622] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.931 [2024-07-24 19:00:00.618672] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.931 [2024-07-24 19:00:00.618692] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.931 [2024-07-24 19:00:00.618719] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.931 [2024-07-24 19:00:00.618754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:54.931 [2024-07-24 19:00:00.618938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.931 [2024-07-24 19:00:00.618995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.931 [2024-07-24 19:00:00.619000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.189 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.189 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:55.189 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:55.189 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:55.189 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:55.189 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.189 19:00:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:55.446 [2024-07-24 19:00:01.101264] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.446 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:56.011 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:56.011 19:00:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:56.576 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:56.576 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:56.831 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:57.395 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0fc902b3-8b3b-4be5-b17c-82ea734b0d62 
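Before any subsystem is exported, the bdev stack for the lvol test is assembled over RPC. A minimal sketch of the sequence the trace shows (rpc.py invoked here by its in-tree path; Malloc0/Malloc1 are the names the create calls print back):

  # two 64 MiB malloc bdevs with 512-byte blocks
  scripts/rpc.py bdev_malloc_create 64 512                           # -> Malloc0
  scripts/rpc.py bdev_malloc_create 64 512                           # -> Malloc1
  # stripe them into a raid0 bdev with a 64 KiB strip size
  scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  # put a logical volume store on the raid bdev; this prints the lvs UUID used by the steps below
  scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs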
00:07:57.395 19:00:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0fc902b3-8b3b-4be5-b17c-82ea734b0d62 lvol 20 00:07:57.652 19:00:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=876664ed-3f66-4edc-bac7-aab20de59bfe 00:07:57.652 19:00:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:57.908 19:00:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 876664ed-3f66-4edc-bac7-aab20de59bfe 00:07:58.166 19:00:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:58.424 [2024-07-24 19:00:04.095617] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.681 19:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:58.945 19:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1555660 00:07:58.946 19:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:58.946 19:00:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:58.946 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.885 19:00:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 876664ed-3f66-4edc-bac7-aab20de59bfe MY_SNAPSHOT 00:08:00.448 19:00:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=89f42994-85f2-4823-bf19-51a0a011ab7e 00:08:00.448 19:00:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 876664ed-3f66-4edc-bac7-aab20de59bfe 30 00:08:01.014 19:00:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 89f42994-85f2-4823-bf19-51a0a011ab7e MY_CLONE 00:08:01.625 19:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=3c0d84ff-5bb3-48fb-ac09-1e6c7c86fa5e 00:08:01.625 19:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 3c0d84ff-5bb3-48fb-ac09-1e6c7c86fa5e 00:08:02.574 19:00:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1555660 00:08:10.683 Initializing NVMe Controllers 00:08:10.683 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:10.683 Controller IO queue size 128, less than required. 00:08:10.683 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
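While spdk_nvme_perf (cores 3 and 4, queue depth 128, random writes) hammers the exported namespace, the test walks the lvol through its lifecycle. Sketching the RPC side with shell variables standing in for the UUIDs printed in the trace:

  lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 20)          # 20 MiB lvol
  snap=$(scripts/rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)      # freeze current contents
  scripts/rpc.py bdev_lvol_resize "$lvol" 30                         # grow the live volume to 30 MiB
  clone=$(scripts/rpc.py bdev_lvol_clone "$snap" MY_CLONE)           # writable clone of the snapshot
  scripts/rpc.py bdev_lvol_inflate "$clone"                          # allocate the clone fully, detaching it from the snapshot

All of this happens while I/O is in flight, which is the point of the test.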
00:08:10.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:10.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:10.683 Initialization complete. Launching workers. 00:08:10.683 ======================================================== 00:08:10.684 Latency(us) 00:08:10.684 Device Information : IOPS MiB/s Average min max 00:08:10.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 7915.10 30.92 16184.59 1841.94 108149.79 00:08:10.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 7787.00 30.42 16451.01 2840.93 103891.84 00:08:10.684 ======================================================== 00:08:10.684 Total : 15702.10 61.34 16316.71 1841.94 108149.79 00:08:10.684 00:08:10.684 19:00:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:10.684 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 876664ed-3f66-4edc-bac7-aab20de59bfe 00:08:10.684 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0fc902b3-8b3b-4be5-b17c-82ea734b0d62 00:08:10.684 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:10.684 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:10.684 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:10.684 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:10.684 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:10.684 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:10.684 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:10.684 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.684 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:10.684 rmmod nvme_tcp 00:08:10.684 rmmod nvme_fabrics 00:08:10.684 rmmod nvme_keyring 00:08:10.684 19:00:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:10.684 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:10.684 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:10.684 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1554994 ']' 00:08:10.684 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1554994 00:08:10.684 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1554994 ']' 00:08:10.684 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1554994 00:08:10.684 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:10.684 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.684 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1554994 00:08:10.684 19:00:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.684 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.684 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1554994' 00:08:10.684 killing process with pid 1554994 00:08:10.684 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1554994 00:08:10.684 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1554994 00:08:10.942 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:10.942 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:10.942 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:10.942 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:10.942 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:10.942 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.942 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.942 19:00:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:13.472 00:08:13.472 real 0m21.247s 00:08:13.472 user 1m10.957s 00:08:13.472 sys 0m6.551s 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:13.472 ************************************ 00:08:13.472 END TEST nvmf_lvol 00:08:13.472 ************************************ 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.472 ************************************ 00:08:13.472 START TEST nvmf_lvs_grow 00:08:13.472 ************************************ 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:13.472 * Looking for test storage... 
00:08:13.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.472 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.473 19:00:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:13.473 19:00:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:13.473 19:00:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:16.754 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:16.754 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:16.754 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:16.755 
19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:16.755 Found net devices under 0000:84:00.0: cvl_0_0 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:16.755 Found net devices under 0000:84:00.1: cvl_0_1 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.755 19:00:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:16.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:16.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:08:16.755 00:08:16.755 --- 10.0.0.2 ping statistics --- 00:08:16.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.755 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:16.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:08:16.755 00:08:16.755 --- 10.0.0.1 ping statistics --- 00:08:16.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.755 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1559581 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1559581 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1559581 ']' 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:16.755 19:00:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.755 [2024-07-24 19:00:21.996960] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:08:16.755 [2024-07-24 19:00:21.997068] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.755 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.755 [2024-07-24 19:00:22.118256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.755 [2024-07-24 19:00:22.322098] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.755 [2024-07-24 19:00:22.322216] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.755 [2024-07-24 19:00:22.322252] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.755 [2024-07-24 19:00:22.322281] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.755 [2024-07-24 19:00:22.322306] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.755 [2024-07-24 19:00:22.322382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.693 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:17.693 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:17.693 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:17.693 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:17.693 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:17.952 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.952 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:18.210 [2024-07-24 19:00:23.893446] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.468 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:18.468 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.469 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.469 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:18.469 ************************************ 00:08:18.469 START TEST lvs_grow_clean 00:08:18.469 ************************************ 00:08:18.469 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:18.469 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:18.469 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:18.469 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:18.469 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:08:18.469 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:18.469 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:18.469 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:18.469 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:18.469 19:00:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:18.727 19:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:18.727 19:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:19.294 19:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f3aab32e-157b-4918-8f21-0326b3e37ae4 00:08:19.294 19:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3aab32e-157b-4918-8f21-0326b3e37ae4 00:08:19.294 19:00:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:19.861 19:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:19.861 19:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:19.861 19:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f3aab32e-157b-4918-8f21-0326b3e37ae4 lvol 150 00:08:20.427 19:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a70f7c3e-db60-4fa0-ba12-6868cea4ce4d 00:08:20.427 19:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:20.427 19:00:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:20.684 [2024-07-24 19:00:26.252606] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:20.685 [2024-07-24 19:00:26.252742] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:20.685 true 00:08:20.685 19:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:20.685 19:00:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3aab32e-157b-4918-8f21-0326b3e37ae4 00:08:21.250 19:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:21.250 19:00:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:21.814 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a70f7c3e-db60-4fa0-ba12-6868cea4ce4d 00:08:22.392 19:00:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:22.679 [2024-07-24 19:00:28.130829] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.679 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:22.937 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1560432 00:08:22.937 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:22.937 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:22.937 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1560432 /var/tmp/bdevperf.sock 00:08:22.937 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1560432 ']' 00:08:22.937 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:22.937 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.937 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:22.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:22.937 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.937 19:00:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:22.937 [2024-07-24 19:00:28.549693] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
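With the target up, lvs_grow_clean builds the stack whose traces appear above, and the cluster counts it keeps asserting follow directly from the sizes involved: the AIO file is 200 MiB and the lvstore cluster size is 4 MiB, so 50 clusters fit, of which 49 show up as total_data_clusters in this run (one cluster's worth is evidently consumed by lvstore metadata). Growing the file to 400 MiB and rescanning updates the bdev (51200 to 102400 blocks in the notice above) but, as the repeated @38 check confirms, the lvstore still reports 49; nothing grows until an explicit bdev_lvol_grow_lvstore later in the run. A condensed sketch, with rpc.py and the file path shortened from the full workspace paths, and $lvs standing for the UUID printed above (f3aab32e-...):

  truncate -s 200M aio_file                        # backing file
  rpc.py bdev_aio_create aio_file aio_bdev 4096    # 4 KiB blocks -> 51200 blocks
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs   # reports 49 total_data_clusters
  rpc.py bdev_lvol_create -u $lvs lvol 150         # 150 MiB -> 38 clusters, 38912 blocks
  truncate -s 400M aio_file                        # grow the file underneath the bdev
  rpc.py bdev_aio_rescan aio_bdev                  # bdev now sees 102400 blocks...
  rpc.py bdev_lvol_get_lvstores -u $lvs            # ...but still 49 data clusters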
00:08:22.937 [2024-07-24 19:00:28.549859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1560432 ] 00:08:22.937 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.195 [2024-07-24 19:00:28.656867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.195 [2024-07-24 19:00:28.799072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.453 19:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.453 19:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:23.453 19:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:24.018 Nvme0n1 00:08:24.018 19:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:24.275 [ 00:08:24.275 { 00:08:24.275 "name": "Nvme0n1", 00:08:24.275 "aliases": [ 00:08:24.275 "a70f7c3e-db60-4fa0-ba12-6868cea4ce4d" 00:08:24.275 ], 00:08:24.275 "product_name": "NVMe disk", 00:08:24.276 "block_size": 4096, 00:08:24.276 "num_blocks": 38912, 00:08:24.276 "uuid": "a70f7c3e-db60-4fa0-ba12-6868cea4ce4d", 00:08:24.276 "assigned_rate_limits": { 00:08:24.276 "rw_ios_per_sec": 0, 00:08:24.276 "rw_mbytes_per_sec": 0, 00:08:24.276 "r_mbytes_per_sec": 0, 00:08:24.276 "w_mbytes_per_sec": 0 00:08:24.276 }, 00:08:24.276 "claimed": false, 00:08:24.276 "zoned": false, 00:08:24.276 "supported_io_types": { 00:08:24.276 "read": true, 00:08:24.276 "write": true, 00:08:24.276 "unmap": true, 00:08:24.276 "flush": true, 00:08:24.276 "reset": true, 00:08:24.276 "nvme_admin": true, 00:08:24.276 "nvme_io": true, 00:08:24.276 "nvme_io_md": false, 00:08:24.276 "write_zeroes": true, 00:08:24.276 "zcopy": false, 00:08:24.276 "get_zone_info": false, 00:08:24.276 "zone_management": false, 00:08:24.276 "zone_append": false, 00:08:24.276 "compare": true, 00:08:24.276 "compare_and_write": true, 00:08:24.276 "abort": true, 00:08:24.276 "seek_hole": false, 00:08:24.276 "seek_data": false, 00:08:24.276 "copy": true, 00:08:24.276 "nvme_iov_md": false 00:08:24.276 }, 00:08:24.276 "memory_domains": [ 00:08:24.276 { 00:08:24.276 "dma_device_id": "system", 00:08:24.276 "dma_device_type": 1 00:08:24.276 } 00:08:24.276 ], 00:08:24.276 "driver_specific": { 00:08:24.276 "nvme": [ 00:08:24.276 { 00:08:24.276 "trid": { 00:08:24.276 "trtype": "TCP", 00:08:24.276 "adrfam": "IPv4", 00:08:24.276 "traddr": "10.0.0.2", 00:08:24.276 "trsvcid": "4420", 00:08:24.276 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:24.276 }, 00:08:24.276 "ctrlr_data": { 00:08:24.276 "cntlid": 1, 00:08:24.276 "vendor_id": "0x8086", 00:08:24.276 "model_number": "SPDK bdev Controller", 00:08:24.276 "serial_number": "SPDK0", 00:08:24.276 "firmware_revision": "24.09", 00:08:24.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:24.276 "oacs": { 00:08:24.276 "security": 0, 00:08:24.276 "format": 0, 00:08:24.276 "firmware": 0, 00:08:24.276 "ns_manage": 0 00:08:24.276 }, 00:08:24.276 
"multi_ctrlr": true, 00:08:24.276 "ana_reporting": false 00:08:24.276 }, 00:08:24.276 "vs": { 00:08:24.276 "nvme_version": "1.3" 00:08:24.276 }, 00:08:24.276 "ns_data": { 00:08:24.276 "id": 1, 00:08:24.276 "can_share": true 00:08:24.276 } 00:08:24.276 } 00:08:24.276 ], 00:08:24.276 "mp_policy": "active_passive" 00:08:24.276 } 00:08:24.276 } 00:08:24.276 ] 00:08:24.276 19:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1560575 00:08:24.276 19:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:24.276 19:00:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:24.534 Running I/O for 10 seconds... 00:08:25.467 Latency(us) 00:08:25.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.467 Nvme0n1 : 1.00 11431.00 44.65 0.00 0.00 0.00 0.00 0.00 00:08:25.467 =================================================================================================================== 00:08:25.467 Total : 11431.00 44.65 0.00 0.00 0.00 0.00 0.00 00:08:25.467 00:08:26.411 19:00:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f3aab32e-157b-4918-8f21-0326b3e37ae4 00:08:26.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.411 Nvme0n1 : 2.00 11593.00 45.29 0.00 0.00 0.00 0.00 0.00 00:08:26.411 =================================================================================================================== 00:08:26.411 Total : 11593.00 45.29 0.00 0.00 0.00 0.00 0.00 00:08:26.411 00:08:26.979 true 00:08:26.979 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3aab32e-157b-4918-8f21-0326b3e37ae4 00:08:26.979 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:27.238 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:27.238 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:27.238 19:00:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1560575 00:08:27.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.496 Nvme0n1 : 3.00 11623.33 45.40 0.00 0.00 0.00 0.00 0.00 00:08:27.496 =================================================================================================================== 00:08:27.496 Total : 11623.33 45.40 0.00 0.00 0.00 0.00 0.00 00:08:27.496 00:08:28.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.430 Nvme0n1 : 4.00 11686.50 45.65 0.00 0.00 0.00 0.00 0.00 00:08:28.430 =================================================================================================================== 00:08:28.430 Total : 11686.50 45.65 0.00 0.00 0.00 0.00 0.00 00:08:28.430 00:08:29.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:29.362 Nvme0n1 : 5.00 11736.80 45.85 0.00 0.00 0.00 0.00 0.00 00:08:29.362 =================================================================================================================== 00:08:29.362 Total : 11736.80 45.85 0.00 0.00 0.00 0.00 0.00 00:08:29.362 00:08:30.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.733 Nvme0n1 : 6.00 11771.83 45.98 0.00 0.00 0.00 0.00 0.00 00:08:30.733 =================================================================================================================== 00:08:30.733 Total : 11771.83 45.98 0.00 0.00 0.00 0.00 0.00 00:08:30.733 00:08:31.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.666 Nvme0n1 : 7.00 11795.57 46.08 0.00 0.00 0.00 0.00 0.00 00:08:31.666 =================================================================================================================== 00:08:31.666 Total : 11795.57 46.08 0.00 0.00 0.00 0.00 0.00 00:08:31.666 00:08:32.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.611 Nvme0n1 : 8.00 11829.25 46.21 0.00 0.00 0.00 0.00 0.00 00:08:32.611 =================================================================================================================== 00:08:32.611 Total : 11829.25 46.21 0.00 0.00 0.00 0.00 0.00 00:08:32.611 00:08:33.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.547 Nvme0n1 : 9.00 11855.78 46.31 0.00 0.00 0.00 0.00 0.00 00:08:33.547 =================================================================================================================== 00:08:33.547 Total : 11855.78 46.31 0.00 0.00 0.00 0.00 0.00 00:08:33.547 00:08:34.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.479 Nvme0n1 : 10.00 11870.80 46.37 0.00 0.00 0.00 0.00 0.00 00:08:34.479 =================================================================================================================== 00:08:34.479 Total : 11870.80 46.37 0.00 0.00 0.00 0.00 0.00 00:08:34.479 00:08:34.479 00:08:34.479 Latency(us) 00:08:34.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.479 Nvme0n1 : 10.00 11878.71 46.40 0.00 0.00 10769.79 2827.76 22136.60 00:08:34.479 =================================================================================================================== 00:08:34.479 Total : 11878.71 46.40 0.00 0.00 10769.79 2827.76 22136.60 00:08:34.479 0 00:08:34.479 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1560432 00:08:34.479 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1560432 ']' 00:08:34.479 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1560432 00:08:34.479 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:34.479 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:34.479 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1560432 00:08:34.479 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:34.479 
19:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:34.479 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1560432' 00:08:34.479 killing process with pid 1560432 00:08:34.479 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1560432 00:08:34.479 Received shutdown signal, test time was about 10.000000 seconds 00:08:34.479 00:08:34.479 Latency(us) 00:08:34.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.479 =================================================================================================================== 00:08:34.479 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:34.479 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1560432 00:08:35.051 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:35.311 19:00:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:35.569 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:35.569 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3aab32e-157b-4918-8f21-0326b3e37ae4 00:08:36.135 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:36.135 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:36.135 19:00:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:36.393 [2024-07-24 19:00:41.950809] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:36.393 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3aab32e-157b-4918-8f21-0326b3e37ae4 00:08:36.393 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:36.393 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3aab32e-157b-4918-8f21-0326b3e37ae4 00:08:36.393 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.393 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.393 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.393 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.393 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.393 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.393 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.393 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:36.393 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3aab32e-157b-4918-8f21-0326b3e37ae4 00:08:36.651 request: 00:08:36.651 { 00:08:36.651 "uuid": "f3aab32e-157b-4918-8f21-0326b3e37ae4", 00:08:36.651 "method": "bdev_lvol_get_lvstores", 00:08:36.651 "req_id": 1 00:08:36.651 } 00:08:36.651 Got JSON-RPC error response 00:08:36.651 response: 00:08:36.651 { 00:08:36.651 "code": -19, 00:08:36.651 "message": "No such device" 00:08:36.651 } 00:08:36.651 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:36.651 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.651 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:36.651 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.651 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:37.254 aio_bdev 00:08:37.254 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a70f7c3e-db60-4fa0-ba12-6868cea4ce4d 00:08:37.254 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=a70f7c3e-db60-4fa0-ba12-6868cea4ce4d 00:08:37.254 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:37.254 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:37.254 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:37.254 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:37.254 19:00:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:37.511 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b a70f7c3e-db60-4fa0-ba12-6868cea4ce4d -t 2000 00:08:37.769 [ 00:08:37.769 { 00:08:37.769 "name": "a70f7c3e-db60-4fa0-ba12-6868cea4ce4d", 00:08:37.769 "aliases": [ 00:08:37.769 "lvs/lvol" 00:08:37.769 ], 00:08:37.769 "product_name": "Logical Volume", 00:08:37.769 "block_size": 4096, 00:08:37.769 "num_blocks": 38912, 00:08:37.769 "uuid": "a70f7c3e-db60-4fa0-ba12-6868cea4ce4d", 00:08:37.769 "assigned_rate_limits": { 00:08:37.769 "rw_ios_per_sec": 0, 00:08:37.769 "rw_mbytes_per_sec": 0, 00:08:37.769 "r_mbytes_per_sec": 0, 00:08:37.769 "w_mbytes_per_sec": 0 00:08:37.769 }, 00:08:37.769 "claimed": false, 00:08:37.769 "zoned": false, 00:08:37.769 "supported_io_types": { 00:08:37.769 "read": true, 00:08:37.769 "write": true, 00:08:37.769 "unmap": true, 00:08:37.769 "flush": false, 00:08:37.769 "reset": true, 00:08:37.769 "nvme_admin": false, 00:08:37.769 "nvme_io": false, 00:08:37.769 "nvme_io_md": false, 00:08:37.769 "write_zeroes": true, 00:08:37.769 "zcopy": false, 00:08:37.769 "get_zone_info": false, 00:08:37.769 "zone_management": false, 00:08:37.769 "zone_append": false, 00:08:37.769 "compare": false, 00:08:37.769 "compare_and_write": false, 00:08:37.769 "abort": false, 00:08:37.769 "seek_hole": true, 00:08:37.769 "seek_data": true, 00:08:37.769 "copy": false, 00:08:37.769 "nvme_iov_md": false 00:08:37.769 }, 00:08:37.769 "driver_specific": { 00:08:37.769 "lvol": { 00:08:37.769 "lvol_store_uuid": "f3aab32e-157b-4918-8f21-0326b3e37ae4", 00:08:37.769 "base_bdev": "aio_bdev", 00:08:37.769 "thin_provision": false, 00:08:37.769 "num_allocated_clusters": 38, 00:08:37.769 "snapshot": false, 00:08:37.769 "clone": false, 00:08:37.769 "esnap_clone": false 00:08:37.769 } 00:08:37.769 } 00:08:37.769 } 00:08:37.769 ] 00:08:37.769 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:37.769 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3aab32e-157b-4918-8f21-0326b3e37ae4 00:08:37.769 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:38.346 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:38.346 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f3aab32e-157b-4918-8f21-0326b3e37ae4 00:08:38.346 19:00:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:38.911 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:38.911 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a70f7c3e-db60-4fa0-ba12-6868cea4ce4d 00:08:39.478 19:00:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f3aab32e-157b-4918-8f21-0326b3e37ae4 00:08:39.736 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:39.994 00:08:39.994 real 0m21.652s 00:08:39.994 user 0m21.143s 00:08:39.994 sys 0m2.594s 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:39.994 ************************************ 00:08:39.994 END TEST lvs_grow_clean 00:08:39.994 ************************************ 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.994 ************************************ 00:08:39.994 START TEST lvs_grow_dirty 00:08:39.994 ************************************ 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:39.994 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:40.274 19:00:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.535 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:40.535 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:40.793 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=fc492a0a-52b2-4b4f-a48b-722341752700 00:08:40.793 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc492a0a-52b2-4b4f-a48b-722341752700 00:08:40.793 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:41.358 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:41.358 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:41.358 19:00:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fc492a0a-52b2-4b4f-a48b-722341752700 lvol 150 00:08:41.924 19:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ae44993c-f032-46d7-8f34-35d3a550e0f0 00:08:41.924 19:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.924 19:00:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:42.489 [2024-07-24 19:00:48.016482] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:42.489 [2024-07-24 19:00:48.016604] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:42.489 true 00:08:42.489 19:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc492a0a-52b2-4b4f-a48b-722341752700 00:08:42.489 19:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:43.053 19:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:43.053 19:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:43.312 19:00:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ae44993c-f032-46d7-8f34-35d3a550e0f0 00:08:43.569 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:44.148 [2024-07-24 19:00:49.791065] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.148 19:00:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
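The dirty pass has just repeated the clean pass's export sequence verbatim, only with the new lvstore/lvol pair (fc492a0a-... / ae44993c-...). For reference, the exported path condenses to the following sketch, where $lvol_uuid stands for ae44993c-f032-46d7-8f34-35d3a550e0f0 and rpc.py abbreviates the full workspace path:

  rpc.py nvmf_create_transport -t tcp -o -u 8192       # done once at target startup
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol_uuid
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above confirms the listener came up inside the target namespace.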
00:08:44.712 19:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1562996 00:08:44.712 19:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:44.712 19:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:44.712 19:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1562996 /var/tmp/bdevperf.sock 00:08:44.712 19:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1562996 ']' 00:08:44.712 19:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:44.712 19:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.712 19:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:44.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:44.712 19:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.712 19:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.712 [2024-07-24 19:00:50.312125] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
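The I/O generator is a second SPDK application: bdevperf starts with -z, so it comes up idle on its own RPC socket, the test attaches the exported namespace as an NVMe/TCP bdev, and only then kicks off the timed workload via bdevperf.py. A sketch of the attach-and-run sequence traced around this point (binary and script paths shortened from the workspace paths in the log):

  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # 4 KiB randwrite, QD 128, 10 s

While the workload runs, the test issues bdev_lvol_grow_lvstore (the @60 trace at the 2-second mark below) and checks that total_data_clusters flips from 49 to 99, that is, 400 MiB / 4 MiB minus the same one-cluster overhead, verifying the lvstore can grow under live I/O.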
00:08:44.712 [2024-07-24 19:00:50.312227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1562996 ] 00:08:44.713 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.713 [2024-07-24 19:00:50.394468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.970 [2024-07-24 19:00:50.534486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.227 19:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.227 19:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:45.227 19:00:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:45.484 Nvme0n1 00:08:45.484 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:45.742 [ 00:08:45.742 { 00:08:45.742 "name": "Nvme0n1", 00:08:45.742 "aliases": [ 00:08:45.742 "ae44993c-f032-46d7-8f34-35d3a550e0f0" 00:08:45.742 ], 00:08:45.742 "product_name": "NVMe disk", 00:08:45.742 "block_size": 4096, 00:08:45.742 "num_blocks": 38912, 00:08:45.742 "uuid": "ae44993c-f032-46d7-8f34-35d3a550e0f0", 00:08:45.742 "assigned_rate_limits": { 00:08:45.742 "rw_ios_per_sec": 0, 00:08:45.742 "rw_mbytes_per_sec": 0, 00:08:45.742 "r_mbytes_per_sec": 0, 00:08:45.742 "w_mbytes_per_sec": 0 00:08:45.742 }, 00:08:45.742 "claimed": false, 00:08:45.742 "zoned": false, 00:08:45.742 "supported_io_types": { 00:08:45.742 "read": true, 00:08:45.742 "write": true, 00:08:45.742 "unmap": true, 00:08:45.742 "flush": true, 00:08:45.742 "reset": true, 00:08:45.742 "nvme_admin": true, 00:08:45.742 "nvme_io": true, 00:08:45.742 "nvme_io_md": false, 00:08:45.742 "write_zeroes": true, 00:08:45.742 "zcopy": false, 00:08:45.742 "get_zone_info": false, 00:08:45.742 "zone_management": false, 00:08:45.742 "zone_append": false, 00:08:45.742 "compare": true, 00:08:45.742 "compare_and_write": true, 00:08:45.742 "abort": true, 00:08:45.742 "seek_hole": false, 00:08:45.742 "seek_data": false, 00:08:45.742 "copy": true, 00:08:45.742 "nvme_iov_md": false 00:08:45.742 }, 00:08:45.742 "memory_domains": [ 00:08:45.742 { 00:08:45.742 "dma_device_id": "system", 00:08:45.742 "dma_device_type": 1 00:08:45.742 } 00:08:45.742 ], 00:08:45.742 "driver_specific": { 00:08:45.742 "nvme": [ 00:08:45.742 { 00:08:45.742 "trid": { 00:08:45.742 "trtype": "TCP", 00:08:45.742 "adrfam": "IPv4", 00:08:45.742 "traddr": "10.0.0.2", 00:08:45.742 "trsvcid": "4420", 00:08:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:45.742 }, 00:08:45.742 "ctrlr_data": { 00:08:45.742 "cntlid": 1, 00:08:45.742 "vendor_id": "0x8086", 00:08:45.742 "model_number": "SPDK bdev Controller", 00:08:45.742 "serial_number": "SPDK0", 00:08:45.742 "firmware_revision": "24.09", 00:08:45.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:45.742 "oacs": { 00:08:45.742 "security": 0, 00:08:45.742 "format": 0, 00:08:45.742 "firmware": 0, 00:08:45.742 "ns_manage": 0 00:08:45.742 }, 00:08:45.742 
"multi_ctrlr": true, 00:08:45.742 "ana_reporting": false 00:08:45.742 }, 00:08:45.742 "vs": { 00:08:45.742 "nvme_version": "1.3" 00:08:45.742 }, 00:08:45.742 "ns_data": { 00:08:45.742 "id": 1, 00:08:45.742 "can_share": true 00:08:45.742 } 00:08:45.742 } 00:08:45.742 ], 00:08:45.742 "mp_policy": "active_passive" 00:08:45.742 } 00:08:45.742 } 00:08:45.742 ] 00:08:45.742 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1563150 00:08:45.742 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:45.742 19:00:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:46.000 Running I/O for 10 seconds... 00:08:46.931 Latency(us) 00:08:46.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:46.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.931 Nvme0n1 : 1.00 11496.00 44.91 0.00 0.00 0.00 0.00 0.00 00:08:46.931 =================================================================================================================== 00:08:46.931 Total : 11496.00 44.91 0.00 0.00 0.00 0.00 0.00 00:08:46.931 00:08:47.864 19:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fc492a0a-52b2-4b4f-a48b-722341752700 00:08:47.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.864 Nvme0n1 : 2.00 11590.00 45.27 0.00 0.00 0.00 0.00 0.00 00:08:47.864 =================================================================================================================== 00:08:47.864 Total : 11590.00 45.27 0.00 0.00 0.00 0.00 0.00 00:08:47.864 00:08:48.121 true 00:08:48.121 19:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc492a0a-52b2-4b4f-a48b-722341752700 00:08:48.121 19:00:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:48.685 19:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:48.685 19:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:48.685 19:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1563150 00:08:48.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.943 Nvme0n1 : 3.00 11663.67 45.56 0.00 0.00 0.00 0.00 0.00 00:08:48.943 =================================================================================================================== 00:08:48.943 Total : 11663.67 45.56 0.00 0.00 0.00 0.00 0.00 00:08:48.943 00:08:49.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.888 Nvme0n1 : 4.00 11701.25 45.71 0.00 0.00 0.00 0.00 0.00 00:08:49.888 =================================================================================================================== 00:08:49.888 Total : 11701.25 45.71 0.00 0.00 0.00 0.00 0.00 00:08:49.888 00:08:51.325 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:51.325 Nvme0n1 : 5.00 11736.20 45.84 0.00 0.00 0.00 0.00 0.00 00:08:51.325 =================================================================================================================== 00:08:51.325 Total : 11736.20 45.84 0.00 0.00 0.00 0.00 0.00 00:08:51.325 00:08:51.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.898 Nvme0n1 : 6.00 11750.17 45.90 0.00 0.00 0.00 0.00 0.00 00:08:51.898 =================================================================================================================== 00:08:51.898 Total : 11750.17 45.90 0.00 0.00 0.00 0.00 0.00 00:08:51.898 00:08:53.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.271 Nvme0n1 : 7.00 11786.29 46.04 0.00 0.00 0.00 0.00 0.00 00:08:53.271 =================================================================================================================== 00:08:53.271 Total : 11786.29 46.04 0.00 0.00 0.00 0.00 0.00 00:08:53.271 00:08:54.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.204 Nvme0n1 : 8.00 11797.50 46.08 0.00 0.00 0.00 0.00 0.00 00:08:54.204 =================================================================================================================== 00:08:54.204 Total : 11797.50 46.08 0.00 0.00 0.00 0.00 0.00 00:08:54.204 00:08:55.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.138 Nvme0n1 : 9.00 11813.11 46.14 0.00 0.00 0.00 0.00 0.00 00:08:55.138 =================================================================================================================== 00:08:55.138 Total : 11813.11 46.14 0.00 0.00 0.00 0.00 0.00 00:08:55.138 00:08:56.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.072 Nvme0n1 : 10.00 11838.30 46.24 0.00 0.00 0.00 0.00 0.00 00:08:56.072 =================================================================================================================== 00:08:56.072 Total : 11838.30 46.24 0.00 0.00 0.00 0.00 0.00 00:08:56.072 00:08:56.072 00:08:56.072 Latency(us) 00:08:56.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.072 Nvme0n1 : 10.01 11841.18 46.25 0.00 0.00 10803.57 2779.21 21456.97 00:08:56.072 =================================================================================================================== 00:08:56.072 Total : 11841.18 46.25 0.00 0.00 10803.57 2779.21 21456.97 00:08:56.072 0 00:08:56.073 19:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1562996 00:08:56.073 19:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1562996 ']' 00:08:56.073 19:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1562996 00:08:56.073 19:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:56.073 19:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.073 19:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1562996 00:08:56.073 19:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:56.073 
19:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:56.073 19:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1562996' 00:08:56.073 killing process with pid 1562996 00:08:56.073 19:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1562996 00:08:56.073 Received shutdown signal, test time was about 10.000000 seconds 00:08:56.073 00:08:56.073 Latency(us) 00:08:56.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.073 =================================================================================================================== 00:08:56.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:56.073 19:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1562996 00:08:56.330 19:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:56.896 19:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:57.155 19:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc492a0a-52b2-4b4f-a48b-722341752700 00:08:57.155 19:01:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:57.721 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:57.721 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:57.721 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1559581 00:08:57.721 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1559581 00:08:57.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1559581 Killed "${NVMF_APP[@]}" "$@" 00:08:57.721 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:57.721 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:57.721 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:57.722 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:57.722 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:57.722 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1564491 00:08:57.722 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:57.722 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@482 -- # waitforlisten 1564491 00:08:57.722 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1564491 ']' 00:08:57.722 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.722 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:57.722 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.722 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:57.722 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:57.722 [2024-07-24 19:01:03.263645] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:08:57.722 [2024-07-24 19:01:03.263747] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.722 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.722 [2024-07-24 19:01:03.354168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.982 [2024-07-24 19:01:03.492389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.982 [2024-07-24 19:01:03.492474] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.982 [2024-07-24 19:01:03.492494] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.982 [2024-07-24 19:01:03.492510] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.982 [2024-07-24 19:01:03.492524] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
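Here is the point of the dirty variant: rather than tearing down cleanly, the test kill -9's the original target (pid 1559581, hence the shell's "Killed" report above), leaving the lvstore's on-disk state dirty, then starts a fresh nvmf_tgt in the same namespace (new pid 1564491). Re-creating the AIO bdev on the old file is what triggers the blobstore recovery notices that follow. As a sketch ($nvmfpid, aio_file and rpc.py are shorthand for the values and paths in the trace):

  kill -9 $nvmfpid                                   # 1559581; no clean shutdown
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # new pid 1564491
  rpc.py bdev_aio_create aio_file aio_bdev 4096      # blobstore detects the dirty
                                                     # shutdown, replays metadata and
                                                     # recovers blobs 0x0 and 0x1

The assertions that come after (free_clusters == 61, data_clusters == 99, where 61 is the 99 total minus the lvol's 38 allocated clusters) then show the grown geometry survived the crash and recovery intact.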
00:08:57.982 [2024-07-24 19:01:03.492570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.982 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.982 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:57.982 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:57.982 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:57.982 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.240 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.240 19:01:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:58.498 [2024-07-24 19:01:04.090578] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:58.498 [2024-07-24 19:01:04.090855] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:58.498 [2024-07-24 19:01:04.090988] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:58.498 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:58.498 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ae44993c-f032-46d7-8f34-35d3a550e0f0 00:08:58.498 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ae44993c-f032-46d7-8f34-35d3a550e0f0 00:08:58.498 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:58.498 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:58.498 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:58.498 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:58.498 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:59.063 19:01:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ae44993c-f032-46d7-8f34-35d3a550e0f0 -t 2000 00:08:59.630 [ 00:08:59.630 { 00:08:59.630 "name": "ae44993c-f032-46d7-8f34-35d3a550e0f0", 00:08:59.630 "aliases": [ 00:08:59.630 "lvs/lvol" 00:08:59.630 ], 00:08:59.630 "product_name": "Logical Volume", 00:08:59.630 "block_size": 4096, 00:08:59.630 "num_blocks": 38912, 00:08:59.630 "uuid": "ae44993c-f032-46d7-8f34-35d3a550e0f0", 00:08:59.630 "assigned_rate_limits": { 00:08:59.630 "rw_ios_per_sec": 0, 00:08:59.630 "rw_mbytes_per_sec": 0, 00:08:59.630 "r_mbytes_per_sec": 0, 00:08:59.630 "w_mbytes_per_sec": 0 00:08:59.630 }, 00:08:59.630 "claimed": false, 00:08:59.630 "zoned": false, 
00:08:59.630 "supported_io_types": { 00:08:59.630 "read": true, 00:08:59.630 "write": true, 00:08:59.630 "unmap": true, 00:08:59.630 "flush": false, 00:08:59.630 "reset": true, 00:08:59.630 "nvme_admin": false, 00:08:59.630 "nvme_io": false, 00:08:59.630 "nvme_io_md": false, 00:08:59.630 "write_zeroes": true, 00:08:59.630 "zcopy": false, 00:08:59.630 "get_zone_info": false, 00:08:59.630 "zone_management": false, 00:08:59.630 "zone_append": false, 00:08:59.630 "compare": false, 00:08:59.630 "compare_and_write": false, 00:08:59.630 "abort": false, 00:08:59.630 "seek_hole": true, 00:08:59.630 "seek_data": true, 00:08:59.630 "copy": false, 00:08:59.630 "nvme_iov_md": false 00:08:59.630 }, 00:08:59.630 "driver_specific": { 00:08:59.630 "lvol": { 00:08:59.630 "lvol_store_uuid": "fc492a0a-52b2-4b4f-a48b-722341752700", 00:08:59.630 "base_bdev": "aio_bdev", 00:08:59.630 "thin_provision": false, 00:08:59.630 "num_allocated_clusters": 38, 00:08:59.630 "snapshot": false, 00:08:59.630 "clone": false, 00:08:59.630 "esnap_clone": false 00:08:59.630 } 00:08:59.630 } 00:08:59.630 } 00:08:59.630 ] 00:08:59.630 19:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:59.630 19:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc492a0a-52b2-4b4f-a48b-722341752700 00:08:59.630 19:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:59.888 19:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:59.888 19:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc492a0a-52b2-4b4f-a48b-722341752700 00:08:59.889 19:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:00.454 19:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:00.454 19:01:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:00.721 [2024-07-24 19:01:06.353952] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:00.721 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc492a0a-52b2-4b4f-a48b-722341752700 00:09:00.721 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:00.721 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc492a0a-52b2-4b4f-a48b-722341752700 00:09:00.721 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.721 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:09:00.721 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.721 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.721 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.721 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:00.721 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:00.721 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:00.721 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc492a0a-52b2-4b4f-a48b-722341752700 00:09:01.289 request: 00:09:01.289 { 00:09:01.289 "uuid": "fc492a0a-52b2-4b4f-a48b-722341752700", 00:09:01.289 "method": "bdev_lvol_get_lvstores", 00:09:01.289 "req_id": 1 00:09:01.289 } 00:09:01.289 Got JSON-RPC error response 00:09:01.289 response: 00:09:01.289 { 00:09:01.289 "code": -19, 00:09:01.289 "message": "No such device" 00:09:01.289 } 00:09:01.289 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:01.289 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:01.289 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:01.289 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:01.289 19:01:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:01.546 aio_bdev 00:09:01.546 19:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ae44993c-f032-46d7-8f34-35d3a550e0f0 00:09:01.546 19:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=ae44993c-f032-46d7-8f34-35d3a550e0f0 00:09:01.546 19:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.546 19:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:01.546 19:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.546 19:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.546 19:01:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:02.113 19:01:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ae44993c-f032-46d7-8f34-35d3a550e0f0 -t 2000 00:09:02.679 [ 00:09:02.679 { 00:09:02.679 "name": "ae44993c-f032-46d7-8f34-35d3a550e0f0", 00:09:02.679 "aliases": [ 00:09:02.679 "lvs/lvol" 00:09:02.679 ], 00:09:02.679 "product_name": "Logical Volume", 00:09:02.679 "block_size": 4096, 00:09:02.679 "num_blocks": 38912, 00:09:02.679 "uuid": "ae44993c-f032-46d7-8f34-35d3a550e0f0", 00:09:02.679 "assigned_rate_limits": { 00:09:02.679 "rw_ios_per_sec": 0, 00:09:02.679 "rw_mbytes_per_sec": 0, 00:09:02.679 "r_mbytes_per_sec": 0, 00:09:02.679 "w_mbytes_per_sec": 0 00:09:02.679 }, 00:09:02.679 "claimed": false, 00:09:02.679 "zoned": false, 00:09:02.679 "supported_io_types": { 00:09:02.679 "read": true, 00:09:02.679 "write": true, 00:09:02.679 "unmap": true, 00:09:02.679 "flush": false, 00:09:02.679 "reset": true, 00:09:02.679 "nvme_admin": false, 00:09:02.679 "nvme_io": false, 00:09:02.679 "nvme_io_md": false, 00:09:02.679 "write_zeroes": true, 00:09:02.679 "zcopy": false, 00:09:02.679 "get_zone_info": false, 00:09:02.679 "zone_management": false, 00:09:02.679 "zone_append": false, 00:09:02.679 "compare": false, 00:09:02.679 "compare_and_write": false, 00:09:02.679 "abort": false, 00:09:02.679 "seek_hole": true, 00:09:02.679 "seek_data": true, 00:09:02.679 "copy": false, 00:09:02.679 "nvme_iov_md": false 00:09:02.679 }, 00:09:02.679 "driver_specific": { 00:09:02.679 "lvol": { 00:09:02.679 "lvol_store_uuid": "fc492a0a-52b2-4b4f-a48b-722341752700", 00:09:02.679 "base_bdev": "aio_bdev", 00:09:02.679 "thin_provision": false, 00:09:02.679 "num_allocated_clusters": 38, 00:09:02.679 "snapshot": false, 00:09:02.679 "clone": false, 00:09:02.679 "esnap_clone": false 00:09:02.679 } 00:09:02.679 } 00:09:02.679 } 00:09:02.679 ] 00:09:02.679 19:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:02.679 19:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc492a0a-52b2-4b4f-a48b-722341752700 00:09:02.679 19:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:02.936 19:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:02.936 19:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc492a0a-52b2-4b4f-a48b-722341752700 00:09:02.936 19:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:03.504 19:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:03.504 19:01:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ae44993c-f032-46d7-8f34-35d3a550e0f0 00:09:03.763 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fc492a0a-52b2-4b4f-a48b-722341752700 
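Both post-recovery assertions above reduce to jq queries against the reloaded lvstore: recovery must neither lose nor leak clusters. A minimal sketch, again with a placeholder $lvs_uuid:

    free=$(rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
    total=$(rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 ))     # same counts as before the kill -9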
00:09:04.021 19:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:04.588 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:04.588 00:09:04.588 real 0m24.585s 00:09:04.588 user 1m1.074s 00:09:04.588 sys 0m5.760s 00:09:04.588 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.588 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:04.588 ************************************ 00:09:04.588 END TEST lvs_grow_dirty 00:09:04.588 ************************************ 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:04.853 nvmf_trace.0 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:04.853 rmmod nvme_tcp 00:09:04.853 rmmod nvme_fabrics 00:09:04.853 rmmod nvme_keyring 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1564491 ']' 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1564491 00:09:04.853 
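The process_shm step above is what preserves the tracepoint buffer the target advertised at startup (/dev/shm/nvmf_trace.0). Stripped of the autotest scaffolding it amounts to the following, with $output_dir standing in for the jenkins output path:

    shm=$(find /dev/shm -name '*.0' -printf '%f\n')              # -> nvmf_trace.0
    tar -C /dev/shm -czf "$output_dir/${shm}_shm.tar.gz" "$shm"  # archive for offline trace analysis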
19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1564491 ']' 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1564491 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1564491 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1564491' 00:09:04.853 killing process with pid 1564491 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1564491 00:09:04.853 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1564491 00:09:05.151 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:05.151 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:05.151 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:05.151 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:05.151 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:05.151 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.151 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.151 19:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.681 19:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:07.681 00:09:07.681 real 0m54.256s 00:09:07.681 user 1m31.816s 00:09:07.681 sys 0m11.389s 00:09:07.681 19:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.681 19:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:07.681 ************************************ 00:09:07.681 END TEST nvmf_lvs_grow 00:09:07.681 ************************************ 00:09:07.681 19:01:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:07.681 19:01:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:07.681 19:01:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.681 19:01:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:07.681 ************************************ 00:09:07.681 START TEST nvmf_bdev_io_wait 00:09:07.681 ************************************ 00:09:07.681 19:01:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:07.681 * Looking for test storage... 00:09:07.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.681 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.681 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:07.681 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.681 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.681 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.681 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:07.682 
19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:07.682 19:01:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:10.230 19:01:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:10.230 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:10.231 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:10.231 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:10.231 Found net devices under 0000:84:00.0: cvl_0_0 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:10.231 Found net devices under 0000:84:00.1: cvl_0_1 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:10.231 19:01:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.231 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:10.490 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.490 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.490 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.490 19:01:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:10.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:10.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:09:10.490 00:09:10.490 --- 10.0.0.2 ping statistics --- 00:09:10.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.490 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:10.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:09:10.490 00:09:10.490 --- 10.0.0.1 ping statistics --- 00:09:10.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.490 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1567429 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1567429 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1567429 ']' 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.490 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.491 19:01:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:10.491 [2024-07-24 19:01:16.095306] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
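The nvmf_tcp_init block above turns the two physical e810 ports into a self-contained link: port 0 (cvl_0_0) moves into a private namespace as the target side, port 1 (cvl_0_1) stays in the root namespace as the initiator, and the two pings verify 10.0.0.1 <-> 10.0.0.2 connectivity before any NVMe/TCP traffic flows. The same topology by hand, using the device names discovered in this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1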
00:09:10.491 [2024-07-24 19:01:16.095404] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.491 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.749 [2024-07-24 19:01:16.202381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.749 [2024-07-24 19:01:16.418471] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.749 [2024-07-24 19:01:16.418581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.749 [2024-07-24 19:01:16.418616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.749 [2024-07-24 19:01:16.418645] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.749 [2024-07-24 19:01:16.418670] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.749 [2024-07-24 19:01:16.418841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.749 [2024-07-24 19:01:16.418903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.749 [2024-07-24 19:01:16.418958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.749 [2024-07-24 19:01:16.418962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.691 19:01:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.691 [2024-07-24 19:01:17.339444] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.691 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.951 Malloc0 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.951 [2024-07-24 19:01:17.412616] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1567586 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1567588 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:11.951 { 00:09:11.951 "params": { 00:09:11.951 "name": "Nvme$subsystem", 00:09:11.951 "trtype": "$TEST_TRANSPORT", 00:09:11.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.951 "adrfam": "ipv4", 00:09:11.951 "trsvcid": "$NVMF_PORT", 00:09:11.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.951 "hdgst": ${hdgst:-false}, 00:09:11.951 "ddgst": ${ddgst:-false} 00:09:11.951 }, 00:09:11.951 "method": "bdev_nvme_attach_controller" 00:09:11.951 } 00:09:11.951 EOF 00:09:11.951 )") 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1567590 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:11.951 { 00:09:11.951 "params": { 00:09:11.951 "name": "Nvme$subsystem", 00:09:11.951 "trtype": "$TEST_TRANSPORT", 00:09:11.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.951 "adrfam": "ipv4", 00:09:11.951 "trsvcid": "$NVMF_PORT", 00:09:11.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.951 "hdgst": ${hdgst:-false}, 00:09:11.951 "ddgst": ${ddgst:-false} 00:09:11.951 }, 00:09:11.951 "method": "bdev_nvme_attach_controller" 00:09:11.951 } 00:09:11.951 EOF 00:09:11.951 )") 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1567593 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:11.951 { 00:09:11.951 "params": { 00:09:11.951 "name": "Nvme$subsystem", 00:09:11.951 "trtype": "$TEST_TRANSPORT", 00:09:11.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.951 "adrfam": "ipv4", 00:09:11.951 "trsvcid": "$NVMF_PORT", 00:09:11.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.951 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.951 "hdgst": ${hdgst:-false}, 00:09:11.951 "ddgst": ${ddgst:-false} 00:09:11.951 }, 00:09:11.951 "method": "bdev_nvme_attach_controller" 00:09:11.951 } 00:09:11.951 EOF 00:09:11.951 )") 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:11.951 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:11.951 { 00:09:11.951 "params": { 00:09:11.951 "name": "Nvme$subsystem", 00:09:11.952 "trtype": "$TEST_TRANSPORT", 00:09:11.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:11.952 "adrfam": "ipv4", 00:09:11.952 "trsvcid": "$NVMF_PORT", 00:09:11.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:11.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:11.952 "hdgst": ${hdgst:-false}, 00:09:11.952 "ddgst": ${ddgst:-false} 00:09:11.952 }, 00:09:11.952 "method": "bdev_nvme_attach_controller" 00:09:11.952 } 00:09:11.952 EOF 00:09:11.952 )") 00:09:11.952 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:11.952 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:11.952 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1567586 00:09:11.952 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:11.952 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:11.952 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:11.952 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:11.952 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:11.952 "params": { 00:09:11.952 "name": "Nvme1", 00:09:11.952 "trtype": "tcp", 00:09:11.952 "traddr": "10.0.0.2", 00:09:11.952 "adrfam": "ipv4", 00:09:11.952 "trsvcid": "4420", 00:09:11.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.952 "hdgst": false, 00:09:11.952 "ddgst": false 00:09:11.952 }, 00:09:11.952 "method": "bdev_nvme_attach_controller" 00:09:11.952 }' 00:09:11.952 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:11.952 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:11.952 "params": { 00:09:11.952 "name": "Nvme1", 00:09:11.952 "trtype": "tcp", 00:09:11.952 "traddr": "10.0.0.2", 00:09:11.952 "adrfam": "ipv4", 00:09:11.952 "trsvcid": "4420", 00:09:11.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.952 "hdgst": false, 00:09:11.952 "ddgst": false 00:09:11.952 }, 00:09:11.952 "method": "bdev_nvme_attach_controller" 00:09:11.952 }' 00:09:11.952 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:11.952 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:11.952 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:11.952 "params": { 00:09:11.952 "name": "Nvme1", 00:09:11.952 "trtype": "tcp", 00:09:11.952 "traddr": "10.0.0.2", 00:09:11.952 "adrfam": "ipv4", 00:09:11.952 "trsvcid": "4420", 00:09:11.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.952 "hdgst": false, 00:09:11.952 "ddgst": false 00:09:11.952 }, 00:09:11.952 "method": "bdev_nvme_attach_controller" 00:09:11.952 }' 00:09:11.952 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:11.952 19:01:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:11.952 "params": { 00:09:11.952 "name": "Nvme1", 00:09:11.952 "trtype": "tcp", 00:09:11.952 "traddr": "10.0.0.2", 00:09:11.952 "adrfam": "ipv4", 00:09:11.952 "trsvcid": "4420", 00:09:11.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:11.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:11.952 "hdgst": false, 00:09:11.952 "ddgst": false 00:09:11.952 }, 00:09:11.952 "method": "bdev_nvme_attach_controller" 00:09:11.952 }' 00:09:11.952 [2024-07-24 19:01:17.463624] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:09:11.952 [2024-07-24 19:01:17.463636] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:09:11.952 [2024-07-24 19:01:17.463624] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
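The printf blocks above are the fully rendered configs, one per bdevperf instance; on the command lines earlier they arrive as --json /dev/fd/63, which is plain bash process substitution. Equivalent invocation, sketched with this workspace's paths:

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  # e.g. the read instance traced at target/bdev_io_wait.sh@29:
  "$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256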
00:09:11.952 [2024-07-24 19:01:17.463723] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:11.952 [2024-07-24 19:01:17.463724] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:11.952 [2024-07-24 19:01:17.463724] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:11.952 [2024-07-24 19:01:17.476602] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:09:11.952 [2024-07-24 19:01:17.476695] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:11.952 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.952 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.210 [2024-07-24 19:01:17.648350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.210 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.210 [2024-07-24 19:01:17.756489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.210 [2024-07-24 19:01:17.772436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:12.210 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.210 [2024-07-24 19:01:17.867148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.210 [2024-07-24 19:01:17.879377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:12.469 [2024-07-24 19:01:17.949239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.469 [2024-07-24 19:01:17.987615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:12.469 [2024-07-24 19:01:18.062583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:12.469 Running I/O for 1 seconds... 00:09:12.727 Running I/O for 1 seconds... 00:09:12.728 Running I/O for 1 seconds... 00:09:12.728 Running I/O for 1 seconds...
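The initialization lines above (de-interleaved here, since the three concurrent processes originally wrote over each other) show why the four instances can coexist: disjoint core masks (0x10/0x20/0x40/0x80), distinct shared-memory ids (-i 1 through 4) and hugepage file prefixes (spdk1 through spdk4), all pointed at the same target. The fan-out/fan-in shape of bdev_io_wait.sh is roughly the following; FLUSH_PID and UNMAP_PID appear verbatim in the traces, the other two PID names are assumed for symmetry:

  # launch the four workloads concurrently, then reap them (cf. the @37-@40 wait calls)
  "$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!
  "$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
  READ_PID=$!
  "$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
  FLUSH_PID=$!
  "$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
  UNMAP_PID=$!
  sync                                   # cf. bdev_io_wait.sh@35
  wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID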
00:09:13.663 00:09:13.663 Latency(us) 00:09:13.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.663 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:13.663 Nvme1n1 : 1.02 4816.08 18.81 0.00 0.00 26212.58 7281.78 36505.98 00:09:13.663 =================================================================================================================== 00:09:13.663 Total : 4816.08 18.81 0.00 0.00 26212.58 7281.78 36505.98 00:09:13.663 00:09:13.663 Latency(us) 00:09:13.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.663 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:13.663 Nvme1n1 : 1.00 150950.94 589.65 0.00 0.00 843.66 344.37 1055.86 00:09:13.663 =================================================================================================================== 00:09:13.663 Total : 150950.94 589.65 0.00 0.00 843.66 344.37 1055.86 00:09:13.663 00:09:13.663 Latency(us) 00:09:13.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.663 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:13.663 Nvme1n1 : 1.01 4711.47 18.40 0.00 0.00 27028.07 9951.76 46215.02 00:09:13.663 =================================================================================================================== 00:09:13.663 Total : 4711.47 18.40 0.00 0.00 27028.07 9951.76 46215.02 00:09:13.921 00:09:13.921 Latency(us) 00:09:13.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.921 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:13.921 Nvme1n1 : 1.01 7688.26 30.03 0.00 0.00 16566.28 8738.13 29709.65 00:09:13.921 =================================================================================================================== 00:09:13.921 Total : 7688.26 30.03 0.00 0.00 16566.28 8738.13 29709.65 00:09:14.179 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1567588 00:09:14.179 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1567590 00:09:14.179 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1567593 00:09:14.179 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.179 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.179 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.179 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.179 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:14.179 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:14.179 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:14.179 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
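A quick sanity check on the result tables above: at a 4096-byte I/O size, MiB/s = IOPS * 4096 / 2^20 = IOPS / 256, which reproduces every row:

  write: 4816.08   / 256 ≈  18.81 MiB/s      read:  4711.47 / 256 ≈ 18.40 MiB/s
  flush: 150950.94 / 256 ≈ 589.65 MiB/s      unmap: 7688.26 / 256 ≈ 30.03 MiB/s

The flush row is not a payload rate: flush commands move no data, so the MiB/s column is only nominal per-command accounting, which is presumably why a malloc-backed namespace completes them at ~151k IOPS with sub-millisecond average latency while the data-carrying workloads sit in the tens of milliseconds at queue depth 128.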
00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:14.438 rmmod nvme_tcp 00:09:14.438 rmmod nvme_fabrics 00:09:14.438 rmmod nvme_keyring 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1567429 ']' 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1567429 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1567429 ']' 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1567429 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1567429 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1567429' 00:09:14.438 killing process with pid 1567429 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1567429 00:09:14.438 19:01:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1567429 00:09:14.697 19:01:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:14.697 19:01:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:14.697 19:01:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:14.697 19:01:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:14.697 19:01:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:14.697 19:01:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.697 19:01:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.697 19:01:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:17.230 00:09:17.230 real 0m9.437s 00:09:17.230 user 0m22.440s 00:09:17.230 sys 0m4.485s 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.230 ************************************ 00:09:17.230 END TEST 
nvmf_bdev_io_wait 00:09:17.230 ************************************ 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:17.230 ************************************ 00:09:17.230 START TEST nvmf_queue_depth 00:09:17.230 ************************************ 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:17.230 * Looking for test storage... 00:09:17.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.230 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- 
# '[' 0 -eq 1 ']' 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:17.231 19:01:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@296 -- # e810=() 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:19.768 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:19.768 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:19.768 Found net devices under 0000:84:00.0: cvl_0_0 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.768 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:19.769 Found net devices under 0000:84:00.1: cvl_0_1 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:19.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:19.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:09:19.769 00:09:19.769 --- 10.0.0.2 ping statistics --- 00:09:19.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.769 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:19.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:19.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:09:19.769 00:09:19.769 --- 10.0.0.1 ping statistics --- 00:09:19.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:19.769 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1569958 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1569958 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1569958 ']' 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
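Condensed, the nvmf_tcp_init sequence traced above builds a two-namespace topology before the target starts: the first E810 port (cvl_0_0) moves into namespace cvl_0_0_ns_spdk as the target side at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings verify both directions before any NVMe/TCP traffic:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
  # every target-side command, including nvmf_tgt itself, is then wrapped in
  # 'ip netns exec cvl_0_0_ns_spdk', as the nvmfpid=1569958 launch above shows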
00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.769 19:01:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:19.769 [2024-07-24 19:01:25.391913] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:09:19.769 [2024-07-24 19:01:25.392078] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.028 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.028 [2024-07-24 19:01:25.514732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.028 [2024-07-24 19:01:25.653771] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.028 [2024-07-24 19:01:25.653849] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.028 [2024-07-24 19:01:25.653869] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.028 [2024-07-24 19:01:25.653884] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.028 [2024-07-24 19:01:25.653898] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.028 [2024-07-24 19:01:25.653955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.991 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.991 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:20.991 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.991 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:20.991 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.991 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.991 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:20.991 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.991 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.991 [2024-07-24 19:01:26.473300] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.991 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.991 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:20.991 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.992 Malloc0 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.992 [2024-07-24 19:01:26.542768] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1570114 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1570114 /var/tmp/bdevperf.sock 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1570114 ']' 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:20.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.992 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:20.992 [2024-07-24 19:01:26.602395] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
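The whole queue_depth setup reduces to five RPCs against the target plus an idle bdevperf driven over its own RPC socket. Condensed from the queue_depth.sh@23-35 traces (rpc.py stands for scripts/rpc.py in this workspace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf waits idle (-z) on its own socket, gets the controller attached
  # there, then perform_tests drives 10 s of verify I/O at queue depth 1024
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

(The serial actually passed in this run is SPDK00000000000001, per the @25 trace; SPDKISFASTANDAWESOME above is the NVMF_SERIAL default from common.sh, shown only to flag that the value is configurable.)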
00:09:20.992 [2024-07-24 19:01:26.602496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1570114 ] 00:09:20.992 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.992 [2024-07-24 19:01:26.684515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.250 [2024-07-24 19:01:26.823189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.508 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.508 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:21.508 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:21.508 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.508 19:01:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.508 NVMe0n1 00:09:21.508 19:01:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.509 19:01:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:21.767 Running I/O for 10 seconds... 00:09:32.839 00:09:32.839 Latency(us) 00:09:32.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.839 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:32.839 Verification LBA range: start 0x0 length 0x4000 00:09:32.839 NVMe0n1 : 10.09 6692.82 26.14 0.00 0.00 152201.49 17864.63 92818.39 00:09:32.839 =================================================================================================================== 00:09:32.839 Total : 6692.82 26.14 0.00 0.00 152201.49 17864.63 92818.39 00:09:32.839 0 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1570114 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1570114 ']' 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1570114 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1570114 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1570114' 00:09:32.839 killing process with pid 1570114 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1570114 00:09:32.839 Received shutdown 
signal, test time was about 10.000000 seconds 00:09:32.839 00:09:32.839 Latency(us) 00:09:32.839 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.839 =================================================================================================================== 00:09:32.839 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1570114 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:32.839 rmmod nvme_tcp 00:09:32.839 rmmod nvme_fabrics 00:09:32.839 rmmod nvme_keyring 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1569958 ']' 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1569958 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1569958 ']' 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1569958 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1569958 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1569958' 00:09:32.839 killing process with pid 1569958 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1569958 00:09:32.839 19:01:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1569958 00:09:32.839 19:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:32.839 19:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:32.839 19:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:32.839 19:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:32.839 19:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:32.839 19:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.839 19:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.839 19:01:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:34.745 00:09:34.745 real 0m17.780s 00:09:34.745 user 0m24.193s 00:09:34.745 sys 0m3.916s 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:34.745 ************************************ 00:09:34.745 END TEST nvmf_queue_depth 00:09:34.745 ************************************ 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.745 ************************************ 00:09:34.745 START TEST nvmf_target_multipath 00:09:34.745 ************************************ 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:34.745 * Looking for test storage... 
00:09:34.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:34.745 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:34.746 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:34.746 19:01:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
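The array declarations above and the pci_bus_cache lookups in the trace entries that follow are how nvmf/common.sh decides which NICs the test may claim: supported devices are grouped by PCI vendor:device ID (0x8086 for Intel E810/X722, 0x15b3 for Mellanox ConnectX). A minimal standalone sketch of the same classification follows; the lspci-based scan and the script name are illustrative assumptions, since the real script uses SPDK's internal pci_bus_cache rather than lspci.

  #!/usr/bin/env bash
  # classify-nics.sh (hypothetical): group Ethernet devices by the PCI IDs
  # that nvmf/common.sh recognizes in the trace below.
  intel=8086 mellanox=15b3
  declare -a e810 x722 mlx
  # lspci -Dn prints: <domain:bus:dev.fn> <class>: <vendor>:<device> [rev]
  while read -r addr vendor device; do
      case "$vendor:$device" in
          "$intel:1592" | "$intel:159b") e810+=("$addr") ;;  # Intel E810 family
          "$intel:37d2")                 x722+=("$addr") ;;  # Intel X722
          "$mellanox:"*)                 mlx+=("$addr")  ;;  # Mellanox ConnectX (simplified to a wildcard here)
      esac
  done < <(lspci -Dn | awk '$2 == "0200:" { split($3, id, ":"); print $1, id[1], id[2] }')
  echo "e810: ${e810[*]:-none} | x722: ${x722[*]:-none} | mlx: ${mlx[*]:-none}"

On this node such a scan would report the two E810 ports (0000:84:00.0 and 0000:84:00.1, ID 8086:159b) that the trace goes on to find.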
00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.034 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:38.035 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:38.035 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:38.035 Found net devices under 0000:84:00.0: cvl_0_0 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.035 19:01:43 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:38.035 Found net devices under 0000:84:00.1: cvl_0_1 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:38.035 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:09:38.035 00:09:38.035 --- 10.0.0.2 ping statistics --- 00:09:38.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.035 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:09:38.035 00:09:38.035 --- 10.0.0.1 ping statistics --- 00:09:38.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.035 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:38.035 only one NIC for nvmf test 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:38.035 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:38.035 rmmod nvme_tcp 00:09:38.035 rmmod nvme_fabrics 00:09:38.035 rmmod nvme_keyring 00:09:38.036 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:38.036 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:38.036 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:38.036 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:38.036 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:38.036 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:38.036 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:38.036 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:38.036 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:38.036 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.036 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.036 19:01:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.939 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:39.940 00:09:39.940 real 0m5.324s 
00:09:39.940 user 0m0.922s 00:09:39.940 sys 0m2.400s 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.940 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:39.940 ************************************ 00:09:39.940 END TEST nvmf_target_multipath 00:09:39.940 ************************************ 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:40.199 ************************************ 00:09:40.199 START TEST nvmf_zcopy 00:09:40.199 ************************************ 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:40.199 * Looking for test storage... 00:09:40.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.199 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.200 19:01:45 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[duplicate /opt toolchain entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=[same PATH, rotated to start with /opt/go/1.21.1/bin] 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=[same PATH, rotated to start with /opt/protoc/21.7/bin] 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo [the exported PATH] 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.200 19:01:45
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:40.200 19:01:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:09:43.486 19:01:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:43.486 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:43.486 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:43.486 Found net devices under 0000:84:00.0: cvl_0_0 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.486 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:43.486 Found net devices under 0000:84:00.1: cvl_0_1 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.487 19:01:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:43.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:09:43.487 00:09:43.487 --- 10.0.0.2 ping statistics --- 00:09:43.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.487 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:43.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:09:43.487 00:09:43.487 --- 10.0.0.1 ping statistics --- 00:09:43.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.487 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1575460 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1575460 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1575460 ']' 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.487 19:01:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.487 [2024-07-24 19:01:48.816100] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:09:43.487 [2024-07-24 19:01:48.816194] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.487 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.487 [2024-07-24 19:01:48.904642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.487 [2024-07-24 19:01:49.042053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.487 [2024-07-24 19:01:49.042113] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.487 [2024-07-24 19:01:49.042133] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.487 [2024-07-24 19:01:49.042150] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.487 [2024-07-24 19:01:49.042164] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.487 [2024-07-24 19:01:49.042199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.421 [2024-07-24 19:01:50.091918] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.421 [2024-07-24 19:01:50.108122] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.421 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.679 malloc0 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:44.679 { 00:09:44.679 "params": { 00:09:44.679 "name": "Nvme$subsystem", 00:09:44.679 "trtype": "$TEST_TRANSPORT", 00:09:44.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.679 "adrfam": "ipv4", 00:09:44.679 "trsvcid": "$NVMF_PORT", 00:09:44.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.679 "hdgst": ${hdgst:-false}, 00:09:44.679 "ddgst": ${ddgst:-false} 00:09:44.679 }, 00:09:44.679 "method": "bdev_nvme_attach_controller" 00:09:44.679 } 00:09:44.679 EOF 00:09:44.679 )") 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
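At this point gen_nvmf_target_json has assembled the bdev_nvme_attach_controller fragment (the heredoc above) and piped it through jq; the fully expanded parameters appear in the printf traced immediately below, and zcopy.sh@33 hands the result to bdevperf as --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192. A sketch of what bdevperf plausibly receives, with the parameter values taken verbatim from the printf below; the outer subsystems/bdev wrapper and the temp-file path are assumptions based on SPDK's usual JSON config layout, since only the params fragment is visible in this log:

  # Hypothetical reconstruction of the config bdevperf reads on /dev/fd/62.
  cat <<'EOF' > /tmp/bdevperf-nvme1.json
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  # Standalone equivalent of the 10-second verify run traced above:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      --json /tmp/bdevperf-nvme1.json -t 10 -q 128 -w verify -o 8192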
00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:44.679 19:01:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:44.679 "params": { 00:09:44.679 "name": "Nvme1", 00:09:44.679 "trtype": "tcp", 00:09:44.679 "traddr": "10.0.0.2", 00:09:44.679 "adrfam": "ipv4", 00:09:44.679 "trsvcid": "4420", 00:09:44.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.679 "hdgst": false, 00:09:44.679 "ddgst": false 00:09:44.679 }, 00:09:44.679 "method": "bdev_nvme_attach_controller" 00:09:44.679 }' 00:09:44.679 [2024-07-24 19:01:50.210874] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:09:44.679 [2024-07-24 19:01:50.210961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575615 ] 00:09:44.679 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.679 [2024-07-24 19:01:50.288045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.937 [2024-07-24 19:01:50.431698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.194 Running I/O for 10 seconds... 00:09:55.200 00:09:55.200 Latency(us) 00:09:55.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:55.200 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:55.200 Verification LBA range: start 0x0 length 0x1000 00:09:55.200 Nvme1n1 : 10.02 4560.46 35.63 0.00 0.00 27979.98 1098.33 37865.24 00:09:55.200 =================================================================================================================== 00:09:55.200 Total : 4560.46 35.63 0.00 0.00 27979.98 1098.33 37865.24 00:09:55.767 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1576935 00:09:55.767 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:55.767 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:55.767 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:55.767 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:55.767 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:55.767 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:55.767 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:55.767 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:55.767 { 00:09:55.767 "params": { 00:09:55.767 "name": "Nvme$subsystem", 00:09:55.767 "trtype": "$TEST_TRANSPORT", 00:09:55.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:55.767 "adrfam": "ipv4", 00:09:55.767 "trsvcid": "$NVMF_PORT", 00:09:55.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:55.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:55.767 "hdgst": ${hdgst:-false}, 00:09:55.767 "ddgst": ${ddgst:-false} 00:09:55.767 }, 00:09:55.768 "method": "bdev_nvme_attach_controller" 00:09:55.768 } 00:09:55.768 EOF 00:09:55.768 )") 00:09:55.768 19:02:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:55.768 [2024-07-24 19:02:01.171585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.171637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:55.768 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:55.768 19:02:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:55.768 "params": { 00:09:55.768 "name": "Nvme1", 00:09:55.768 "trtype": "tcp", 00:09:55.768 "traddr": "10.0.0.2", 00:09:55.768 "adrfam": "ipv4", 00:09:55.768 "trsvcid": "4420", 00:09:55.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:55.768 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:55.768 "hdgst": false, 00:09:55.768 "ddgst": false 00:09:55.768 }, 00:09:55.768 "method": "bdev_nvme_attach_controller" 00:09:55.768 }' 00:09:55.768 [2024-07-24 19:02:01.179555] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.179588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.187563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.187593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.195572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.195602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.203594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.203623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.211619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.211648] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.219641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.219670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.221861] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:09:55.768 [2024-07-24 19:02:01.221945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576935 ] 00:09:55.768 [2024-07-24 19:02:01.227661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.227709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.235702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.235733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.243722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.243752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.251748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.251778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.259773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.259803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.768 [2024-07-24 19:02:01.267798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.267828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.275819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.275849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.283841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.283871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.291863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.291893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.299889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.299920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.306042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.768 [2024-07-24 19:02:01.307908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.307939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.315954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.315994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.323972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 
19:02:01.324009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.331979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.332008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.339999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.340029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.348021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.348050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.356045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.356075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.364066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.364096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.372089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.372120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.384146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.384182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.392166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.392203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.400167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.400199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.408189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.408220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.416211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.416241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.424235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.424265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.432260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.432290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.440279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.440310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.448301] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.448339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.768 [2024-07-24 19:02:01.452000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.768 [2024-07-24 19:02:01.456322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.768 [2024-07-24 19:02:01.456353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.027 [2024-07-24 19:02:01.464345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.464375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.472388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.472425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.480410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.480454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.488436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.488487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.496456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.496507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.504495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.504533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.512518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.512555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.520538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.520573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.528559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.528590] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.536566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.536596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.544596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.544630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.552623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.552659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.560640] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.560690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.568651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.568696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.576690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.576719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.584711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.584741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.592812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.592849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.600830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.600864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.608853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.608886] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.616875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.616908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.624897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.624929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.632920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.632951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.640945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.640975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.648968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.648998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.656995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.657027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.665022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.665055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.673051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.673084] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.681072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.681102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.689132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.689173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.697147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.697180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 Running I/O for 5 seconds... 00:09:56.028 [2024-07-24 19:02:01.705171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.705201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.028 [2024-07-24 19:02:01.723080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.028 [2024-07-24 19:02:01.723116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.286 [2024-07-24 19:02:01.738396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.286 [2024-07-24 19:02:01.738443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.286 [2024-07-24 19:02:01.753778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.286 [2024-07-24 19:02:01.753822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.286 [2024-07-24 19:02:01.768891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.286 [2024-07-24 19:02:01.768929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.286 [2024-07-24 19:02:01.783703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.286 [2024-07-24 19:02:01.783750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.286 [2024-07-24 19:02:01.798691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.286 [2024-07-24 19:02:01.798729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.286 [2024-07-24 19:02:01.813486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.286 [2024-07-24 19:02:01.813530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.286 [2024-07-24 19:02:01.828301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.286 [2024-07-24 19:02:01.828338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.286 [2024-07-24 19:02:01.842504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.286 [2024-07-24 19:02:01.842541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.286 [2024-07-24 19:02:01.856649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.286 [2024-07-24 19:02:01.856712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.286 [2024-07-24 19:02:01.871197] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.286 [2024-07-24 19:02:01.871235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.286 [2024-07-24 19:02:01.885932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.286 [2024-07-24 19:02:01.885970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.286 [2024-07-24 19:02:01.900327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.286 [2024-07-24 19:02:01.900365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.287 [2024-07-24 19:02:01.914819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.287 [2024-07-24 19:02:01.914857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.287 [2024-07-24 19:02:01.929261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.287 [2024-07-24 19:02:01.929300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.287 [2024-07-24 19:02:01.943350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.287 [2024-07-24 19:02:01.943388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.287 [2024-07-24 19:02:01.958072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.287 [2024-07-24 19:02:01.958110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.287 [2024-07-24 19:02:01.973013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.287 [2024-07-24 19:02:01.973051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:01.987924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:01.987962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.002630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.002666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.017183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.017220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.031250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.031288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.045876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.045913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.060301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.060347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.074893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.074930] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.088915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.088954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.103279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.103317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.117653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.117705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.131959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.131996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.146323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.146361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.161144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.161182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.175588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.175625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.189747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.189785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.204386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.204423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.219234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.219272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.545 [2024-07-24 19:02:02.233402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.545 [2024-07-24 19:02:02.233469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.248070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.248106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.262791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.262838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.277085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.277122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.291911] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.291958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.306478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.306519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.321016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.321054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.336026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.336073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.350865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.350902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.365239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.365276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.379931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.379968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.394082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.394119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.409339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.409377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.423705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.423743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.438826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.438863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.453403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.453465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.468366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.468414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.482657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.482715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.804 [2024-07-24 19:02:02.498046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.804 [2024-07-24 19:02:02.498086] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.512620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.512655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.527240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.527277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.541829] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.541867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.556164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.556202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.570725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.570769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.585123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.585160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.599867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.599905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.614833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.614899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.629090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.629127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.643147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.643184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.657522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.657556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.671936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.671977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.687029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.687070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.700612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.700647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.714574] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.714610] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.728805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.728842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.743548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.743583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.063 [2024-07-24 19:02:02.758633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.063 [2024-07-24 19:02:02.758683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.771871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.771908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.786382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.786419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.800421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.800492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.814832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.814868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.829118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.829155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.843528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.843564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.858244] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.858281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.872379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.872418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.887443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.887498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.902599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.902635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.916786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.916823] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.931782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.931820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.946105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.946143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.960580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.960617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.975268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.975306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:02.990118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:02.990156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.322 [2024-07-24 19:02:03.004685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.322 [2024-07-24 19:02:03.004730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.019109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.581 [2024-07-24 19:02:03.019166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.033757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.581 [2024-07-24 19:02:03.033795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.048204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.581 [2024-07-24 19:02:03.048254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.062619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.581 [2024-07-24 19:02:03.062656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.077064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.581 [2024-07-24 19:02:03.077101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.092144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.581 [2024-07-24 19:02:03.092181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.106405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.581 [2024-07-24 19:02:03.106451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.120723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.581 [2024-07-24 19:02:03.120760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.134966] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.581 [2024-07-24 19:02:03.135003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.150100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.581 [2024-07-24 19:02:03.150136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.164758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.581 [2024-07-24 19:02:03.164810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.179779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.581 [2024-07-24 19:02:03.179816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.194357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.581 [2024-07-24 19:02:03.194394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.209105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.581 [2024-07-24 19:02:03.209142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.224832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.581 [2024-07-24 19:02:03.224869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.581 [2024-07-24 19:02:03.239954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.582 [2024-07-24 19:02:03.239990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.582 [2024-07-24 19:02:03.254557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.582 [2024-07-24 19:02:03.254593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.582 [2024-07-24 19:02:03.269254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.582 [2024-07-24 19:02:03.269294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.283703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.283741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.297987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.298025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.312369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.312414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.326586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.326622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.340916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.340955] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.355742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.355780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.370006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.370043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.385040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.385079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.400277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.400314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.414627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.414663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.428883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.428922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.443205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.443243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.457511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.457549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.471981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.472021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.486341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.486379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.500038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.500076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.514450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.514487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.845 [2024-07-24 19:02:03.529069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.845 [2024-07-24 19:02:03.529106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.105 [2024-07-24 19:02:03.543386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.105 [2024-07-24 19:02:03.543423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.105 [2024-07-24 19:02:03.558162] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.105 [2024-07-24 19:02:03.558200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.105 [2024-07-24 19:02:03.572922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.105 [2024-07-24 19:02:03.572959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.105 [2024-07-24 19:02:03.587918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.105 [2024-07-24 19:02:03.587955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.105 [2024-07-24 19:02:03.602094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.105 [2024-07-24 19:02:03.602131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.105 [2024-07-24 19:02:03.616777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.105 [2024-07-24 19:02:03.616814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.105 [2024-07-24 19:02:03.631447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.105 [2024-07-24 19:02:03.631483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.106 [2024-07-24 19:02:03.645756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.106 [2024-07-24 19:02:03.645794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.106 [2024-07-24 19:02:03.660103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.106 [2024-07-24 19:02:03.660151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.106 [2024-07-24 19:02:03.674530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.106 [2024-07-24 19:02:03.674574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.106 [2024-07-24 19:02:03.689018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.106 [2024-07-24 19:02:03.689054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.106 [2024-07-24 19:02:03.703623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.106 [2024-07-24 19:02:03.703660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.106 [2024-07-24 19:02:03.718180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.106 [2024-07-24 19:02:03.718218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.106 [2024-07-24 19:02:03.732862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.106 [2024-07-24 19:02:03.732910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.106 [2024-07-24 19:02:03.746875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.106 [2024-07-24 19:02:03.746912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.106 [2024-07-24 19:02:03.761068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.106 [2024-07-24 19:02:03.761108] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.106 [2024-07-24 19:02:03.775550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.106 [2024-07-24 19:02:03.775588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.106 [2024-07-24 19:02:03.789709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.106 [2024-07-24 19:02:03.789746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.364 [2024-07-24 19:02:03.803946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.364 [2024-07-24 19:02:03.803993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.364 [2024-07-24 19:02:03.818036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.364 [2024-07-24 19:02:03.818073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.364 [2024-07-24 19:02:03.832405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.364 [2024-07-24 19:02:03.832452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.364 [2024-07-24 19:02:03.846683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.364 [2024-07-24 19:02:03.846720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.364 [2024-07-24 19:02:03.861094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.364 [2024-07-24 19:02:03.861132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.364 [2024-07-24 19:02:03.875255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.364 [2024-07-24 19:02:03.875292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.364 [2024-07-24 19:02:03.889412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.364 [2024-07-24 19:02:03.889470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.364 [2024-07-24 19:02:03.903908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.364 [2024-07-24 19:02:03.903944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.364 [2024-07-24 19:02:03.918050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.364 [2024-07-24 19:02:03.918087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.364 [2024-07-24 19:02:03.932488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.364 [2024-07-24 19:02:03.932525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.364 [2024-07-24 19:02:03.946571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.364 [2024-07-24 19:02:03.946608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.364 [2024-07-24 19:02:03.961380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.364 [2024-07-24 19:02:03.961417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.364 [2024-07-24 19:02:03.975331] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:58.364 [2024-07-24 19:02:03.975369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two *ERROR* lines repeat as a pair roughly every 14 ms for the rest of the I/O window, elapsed 00:09:58.364 through 00:10:01.212 (wall clock 19:02:03.989 - 19:02:06.646); about 185 duplicate pairs elided ...]
00:10:01.212 [2024-07-24 19:02:06.661027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:01.212 [2024-07-24 19:02:06.661063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
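Note: the repeated pair above is expected output, not a failure. While the I/O job is running, zcopy.sh keeps calling nvmf_subsystem_add_ns for NSID 1, which is still attached, and the target rejects every attempt. A minimal bash sketch that reproduces the same two errors against a running target follows; the explicit rpc.py path and the iteration count are illustrative assumptions, while the subsystem NQN and bdev name match the ones this log uses later:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for _ in $(seq 1 10); do
      # each call fails with "Requested NSID 1 already in use" while NSID 1 is attached
      "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done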
00:10:01.212 [2024-07-24 19:02:06.675806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[... the matching "Unable to add namespace" line and three further repetitions of the pair elided (wall clock 19:02:06.675 - 19:02:06.717) ...]
00:10:01.212 [2024-07-24 19:02:06.728711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:01.212 [2024-07-24 19:02:06.728748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:01.212
00:10:01.212 Latency(us)
00:10:01.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:01.212 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:01.212 Nvme1n1 : 5.01 8825.39 68.95 0.00 0.00 14481.59 6796.33 26408.58
00:10:01.212 ===================================================================================================================
00:10:01.212 Total : 8825.39 68.95 0.00 0.00 14481.59 6796.33 26408.58
00:10:01.212 [2024-07-24 19:02:06.733392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:01.212 [2024-07-24 19:02:06.733425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair continues at ~8 ms intervals after the job summary (wall clock 19:02:06.741 - 19:02:06.893); about 19 duplicate pairs elided ...]
00:10:01.212 [2024-07-24 19:02:06.901851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:01.212 [2024-07-24 19:02:06.901880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
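As a quick consistency check on the job summary above: the MiB/s column follows from IOPS times the 8192-byte I/O size shown on the Job line. The one-liner below is an illustration, not part of the captured output:

  # 8825.39 IOPS * 8192 B per I/O / 1 MiB = 68.95 MiB/s, matching the Nvme1n1 row
  awk 'BEGIN { printf "%.2f\n", 8825.39 * 8192 / (1024 * 1024) }'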
00:10:01.470 [2024-07-24 19:02:06.909886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[... the matching "Unable to add namespace" line and the remaining repetitions of the pair at ~8 ms intervals elided (wall clock 19:02:06.909 - 19:02:07.038); about 15 duplicate pairs ...]
00:10:01.471 [2024-07-24 19:02:07.046256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:01.471 [2024-07-24 19:02:07.046285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:01.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1576935) - No such process
00:10:01.471 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1576935
00:10:01.471 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:01.471 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:01.471 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:01.471 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:01.471 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:01.471 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:01.471 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:01.471 delay0
00:10:01.471 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:01.471 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:01.471 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:01.471 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:01.471 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:01.471 19:02:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:10:01.729 EAL: No free 2048 kB hugepages reported on node 1
00:10:01.729 [2024-07-24 19:02:07.220634] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:08.290 Initializing NVMe Controllers
00:10:08.290 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:08.290 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:08.290 Initialization complete. Launching workers.
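What happens above: the test removes the original namespace, wraps malloc0 in a delay bdev with one-second (1,000,000 us) average and tail read/write latencies, re-attaches it as NSID 1, and runs the abort example against it. With a 64-deep queue, a 5-second run, and 1 s of injected latency, nearly every command is still outstanding when its abort is issued, which is the behavior under test. The same sequence can be driven by hand; a sketch using the commands from this log, where the explicit rpc.py path stands in for the suite's rpc_cmd wrapper and is an assumption:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # delay bdev on top of malloc0; the four latency arguments are in microseconds, as in the log
  "$rpc_py" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # -q 64: queue depth, -t 5: seconds, -w randrw -M 50: 50/50 mixed workload, -l warning: log level
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 \
      -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'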
00:10:08.290 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 266, failed: 8756
00:10:08.290 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 8924, failed to submit 98
00:10:08.290 success 8827, unsuccess 97, failed 0
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:08.290 rmmod nvme_tcp
00:10:08.290 rmmod nvme_fabrics
00:10:08.290 rmmod nvme_keyring
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1575460 ']'
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1575460
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1575460 ']'
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1575460
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1575460
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1575460'
killing process with pid 1575460
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1575460
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1575460
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:08.290 19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
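The abort counters reported above (I/O completed: 266, failed: 8756; abort submitted 8924, failed to submit 98; success 8827, unsuccess 97) line up arithmetically. The checks below are illustrative one-liners, not part of the captured output:

  echo $(( 8827 + 97 ))    # 8924, the number of abort commands submitted
  echo $(( 266 + 8756 ))   # 9022 I/Os total = 8924 with aborts submitted + 98 whose aborts could not be submitted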
19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
19:02:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:10.830 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:10.830
00:10:10.830 real 0m30.291s
00:10:10.830 user 0m42.622s
00:10:10.830 sys 0m10.298s
00:10:10.830 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:10.830 19:02:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:10.830 ************************************
00:10:10.830 END TEST nvmf_zcopy
00:10:10.830 ************************************
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:10.831 ************************************
00:10:10.831 START TEST nvmf_nmic
00:10:10.831 ************************************
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:10.831 * Looking for test storage...
00:10:10.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN"
"--hostid=$NVME_HOSTID") 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:10.831 19:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:13.411 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:13.412 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:13.412 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:13.412 Found net devices under 0000:84:00.0: cvl_0_0 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:13.412 Found net devices under 0000:84:00.1: cvl_0_1 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:13.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:13.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:10:13.412 00:10:13.412 --- 10.0.0.2 ping statistics --- 00:10:13.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.412 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:13.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:10:13.412 00:10:13.412 --- 10.0.0.1 ping statistics --- 00:10:13.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.412 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1580356 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1580356 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1580356 ']' 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.412 19:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.412 [2024-07-24 19:02:18.990855] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
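The namespace plumbing logged just above is what gives this single-host run a real TCP path between initiator and target. A condensed sketch of the equivalent manual steps, assuming the cvl_0_0/cvl_0_1 port names and the 10.0.0.0/24 addressing used in this run (all commands appear in the trace; only the grouping and comments are added here):

  # Move the target-side port into its own network namespace so the
  # kernel initiator and the SPDK target do not share a network stack.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator side stays in the root namespace on 10.0.0.1.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up

  # Target side lives in the namespace on 10.0.0.2.
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP port and verify reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Moving only the target-side port into cvl_0_0_ns_spdk lets the kernel NVMe/TCP initiator in the root namespace reach the SPDK target over the physical link rather than loopback, which is the point of the phy (NET_TYPE=phy) variant of this job.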
00:10:13.412 [2024-07-24 19:02:18.990947] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.412 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.412 [2024-07-24 19:02:19.085263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.671 [2024-07-24 19:02:19.289229] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.671 [2024-07-24 19:02:19.289339] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.671 [2024-07-24 19:02:19.289374] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.671 [2024-07-24 19:02:19.289402] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.671 [2024-07-24 19:02:19.289442] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.671 [2024-07-24 19:02:19.289553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.671 [2024-07-24 19:02:19.289618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.671 [2024-07-24 19:02:19.289679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.671 [2024-07-24 19:02:19.289683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.603 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.603 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:14.603 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:14.603 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:14.603 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.603 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.603 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:14.603 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.603 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.603 [2024-07-24 19:02:20.284614] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.603 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.603 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:14.603 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.603 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.861 Malloc0 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.861 [2024-07-24 19:02:20.342497] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:14.861 test case1: single bdev can't be used in multiple subsystems 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.861 [2024-07-24 19:02:20.366246] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:14.861 [2024-07-24 19:02:20.366288] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:14.861 [2024-07-24 19:02:20.366309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.861 request: 00:10:14.861 { 00:10:14.861 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:14.861 "namespace": { 
00:10:14.861 "bdev_name": "Malloc0", 00:10:14.861 "no_auto_visible": false 00:10:14.861 }, 00:10:14.861 "method": "nvmf_subsystem_add_ns", 00:10:14.861 "req_id": 1 00:10:14.861 } 00:10:14.861 Got JSON-RPC error response 00:10:14.861 response: 00:10:14.861 { 00:10:14.861 "code": -32602, 00:10:14.861 "message": "Invalid parameters" 00:10:14.861 } 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:14.861 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:14.861 Adding namespace failed - expected result. 00:10:14.862 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:14.862 test case2: host connect to nvmf target in multiple paths 00:10:14.862 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:14.862 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.862 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:14.862 [2024-07-24 19:02:20.378412] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:14.862 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.862 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.426 19:02:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:15.991 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:15.991 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:15.991 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:15.991 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:15.991 19:02:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:18.514 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:18.514 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:18.514 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:18.514 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:18.514 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:18.514 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 
00:10:18.514 19:02:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:18.514 [global] 00:10:18.514 thread=1 00:10:18.514 invalidate=1 00:10:18.514 rw=write 00:10:18.514 time_based=1 00:10:18.514 runtime=1 00:10:18.514 ioengine=libaio 00:10:18.514 direct=1 00:10:18.514 bs=4096 00:10:18.514 iodepth=1 00:10:18.514 norandommap=0 00:10:18.514 numjobs=1 00:10:18.514 00:10:18.514 verify_dump=1 00:10:18.514 verify_backlog=512 00:10:18.514 verify_state_save=0 00:10:18.514 do_verify=1 00:10:18.514 verify=crc32c-intel 00:10:18.514 [job0] 00:10:18.514 filename=/dev/nvme0n1 00:10:18.514 Could not set queue depth (nvme0n1) 00:10:18.514 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.514 fio-3.35 00:10:18.514 Starting 1 thread 00:10:19.448 00:10:19.448 job0: (groupid=0, jobs=1): err= 0: pid=1581120: Wed Jul 24 19:02:25 2024 00:10:19.448 read: IOPS=21, BW=85.7KiB/s (87.7kB/s)(88.0KiB/1027msec) 00:10:19.448 slat (nsec): min=10500, max=49638, avg=21901.36, stdev=10515.92 00:10:19.448 clat (usec): min=445, max=41956, avg=37341.61, stdev=11932.46 00:10:19.448 lat (usec): min=459, max=41971, avg=37363.51, stdev=11935.54 00:10:19.448 clat percentiles (usec): 00:10:19.448 | 1.00th=[ 445], 5.00th=[ 523], 10.00th=[40633], 20.00th=[40633], 00:10:19.448 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:19.448 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:19.448 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:19.448 | 99.99th=[42206] 00:10:19.448 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:10:19.448 slat (usec): min=9, max=33031, avg=81.32, stdev=1459.06 00:10:19.448 clat (usec): min=206, max=469, avg=314.50, stdev=48.18 00:10:19.448 lat (usec): min=217, max=33443, avg=395.83, stdev=1464.22 00:10:19.448 clat percentiles (usec): 00:10:19.448 | 1.00th=[ 210], 5.00th=[ 225], 10.00th=[ 253], 20.00th=[ 273], 00:10:19.448 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 330], 00:10:19.448 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 383], 00:10:19.448 | 99.00th=[ 433], 99.50th=[ 457], 99.90th=[ 469], 99.95th=[ 469], 00:10:19.448 | 99.99th=[ 469] 00:10:19.448 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:19.448 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:19.448 lat (usec) : 250=8.99%, 500=87.08%, 750=0.19% 00:10:19.448 lat (msec) : 50=3.75% 00:10:19.448 cpu : usr=0.68%, sys=0.97%, ctx=536, majf=0, minf=2 00:10:19.448 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:19.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:19.448 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:19.448 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:19.448 00:10:19.448 Run status group 0 (all jobs): 00:10:19.448 READ: bw=85.7KiB/s (87.7kB/s), 85.7KiB/s-85.7KiB/s (87.7kB/s-87.7kB/s), io=88.0KiB (90.1kB), run=1027-1027msec 00:10:19.448 WRITE: bw=1994KiB/s (2042kB/s), 1994KiB/s-1994KiB/s (2042kB/s-2042kB/s), io=2048KiB (2097kB), run=1027-1027msec 00:10:19.448 00:10:19.448 Disk stats (read/write): 00:10:19.448 nvme0n1: ios=45/512, merge=0/0, ticks=1645/159, in_queue=1804, 
util=98.80% 00:10:19.448 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:19.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:19.706 rmmod nvme_tcp 00:10:19.706 rmmod nvme_fabrics 00:10:19.706 rmmod nvme_keyring 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1580356 ']' 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1580356 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1580356 ']' 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1580356 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:19.706 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:19.707 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1580356 00:10:19.707 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:19.707 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:19.707 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1580356' 00:10:19.707 killing process with pid 1580356 00:10:19.707 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # 
kill 1580356 00:10:19.707 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1580356 00:10:20.274 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:20.274 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:20.274 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:20.274 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:20.274 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:20.274 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.274 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.274 19:02:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.176 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:22.176 00:10:22.176 real 0m11.811s 00:10:22.176 user 0m27.328s 00:10:22.176 sys 0m2.997s 00:10:22.176 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.176 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.176 ************************************ 00:10:22.176 END TEST nvmf_nmic 00:10:22.176 ************************************ 00:10:22.435 19:02:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:22.435 19:02:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:22.435 19:02:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.435 19:02:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.436 ************************************ 00:10:22.436 START TEST nvmf_fio_target 00:10:22.436 ************************************ 00:10:22.436 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:22.436 * Looking for test storage... 
00:10:22.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.436 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.436 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:22.436 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.436 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.436 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.436 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.436 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.436 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.436 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.436 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.436 19:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.436 19:02:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:22.436 19:02:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:22.436 19:02:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:24.970 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:24.971 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:24.971 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:24.971 
19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:24.971 Found net devices under 0000:84:00.0: cvl_0_0 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:24.971 Found net devices under 0000:84:00.1: cvl_0_1 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:24.971 19:02:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:24.971 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:25.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:25.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:10:25.229 00:10:25.229 --- 10.0.0.2 ping statistics --- 00:10:25.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.229 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:25.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:25.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:10:25.229 00:10:25.229 --- 10.0.0.1 ping statistics --- 00:10:25.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.229 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1583334 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1583334 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1583334 ']' 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.229 19:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.229 [2024-07-24 19:02:30.864748] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
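The nvmf_tcp_init block above builds the two-endpoint test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target port at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits TCP/4420 traffic on the initiator-side interface, and the two pings prove reachability in both directions before the target app is launched inside the namespace. Stripped of the xtrace prefixes, the sequence is approximately:

    # Shape of nvmf_tcp_init as traced above (sketch, not the script itself):
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Running nvmf_tgt under ip netns exec (the NVMF_TARGET_NS_CMD prefix seen in the nvmfpid line below) is what lets the target and initiator share one host while still exercising a real TCP path between the two physical ports.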
00:10:25.229 [2024-07-24 19:02:30.864861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.229 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.487 [2024-07-24 19:02:30.958164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:25.487 [2024-07-24 19:02:31.111506] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.487 [2024-07-24 19:02:31.111588] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.487 [2024-07-24 19:02:31.111609] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.487 [2024-07-24 19:02:31.111627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.487 [2024-07-24 19:02:31.111641] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.487 [2024-07-24 19:02:31.111727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.487 [2024-07-24 19:02:31.111791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.487 [2024-07-24 19:02:31.111866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.487 [2024-07-24 19:02:31.111871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.745 19:02:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.745 19:02:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:25.745 19:02:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:25.745 19:02:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:25.745 19:02:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.745 19:02:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.745 19:02:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:26.311 [2024-07-24 19:02:31.836673] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.311 19:02:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:26.569 19:02:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:26.569 19:02:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:27.134 19:02:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:27.134 19:02:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:27.391 19:02:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:27.391 19:02:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:27.960 19:02:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:27.960 19:02:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:28.560 19:02:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:28.817 19:02:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:28.817 19:02:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:29.075 19:02:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:29.075 19:02:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:29.333 19:02:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:29.333 19:02:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:29.898 19:02:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:30.156 19:02:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:30.156 19:02:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:30.721 19:02:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:30.721 19:02:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:30.978 19:02:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.235 [2024-07-24 19:02:36.855943] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.235 19:02:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:31.800 19:02:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:32.058 19:02:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:32.990 19:02:38 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:32.990 19:02:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:32.990 19:02:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:32.990 19:02:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:32.990 19:02:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:32.990 19:02:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:34.890 19:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:34.890 19:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:34.890 19:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:34.890 19:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:34.890 19:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:34.890 19:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:34.890 19:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:34.890 [global] 00:10:34.890 thread=1 00:10:34.890 invalidate=1 00:10:34.890 rw=write 00:10:34.890 time_based=1 00:10:34.890 runtime=1 00:10:34.890 ioengine=libaio 00:10:34.890 direct=1 00:10:34.890 bs=4096 00:10:34.890 iodepth=1 00:10:34.890 norandommap=0 00:10:34.890 numjobs=1 00:10:34.890 00:10:34.890 verify_dump=1 00:10:34.890 verify_backlog=512 00:10:34.890 verify_state_save=0 00:10:34.890 do_verify=1 00:10:34.890 verify=crc32c-intel 00:10:34.890 [job0] 00:10:34.890 filename=/dev/nvme0n1 00:10:34.890 [job1] 00:10:34.890 filename=/dev/nvme0n2 00:10:34.890 [job2] 00:10:34.890 filename=/dev/nvme0n3 00:10:34.890 [job3] 00:10:34.890 filename=/dev/nvme0n4 00:10:34.890 Could not set queue depth (nvme0n1) 00:10:34.890 Could not set queue depth (nvme0n2) 00:10:34.890 Could not set queue depth (nvme0n3) 00:10:34.890 Could not set queue depth (nvme0n4) 00:10:35.148 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.148 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.148 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.148 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.148 fio-3.35 00:10:35.148 Starting 4 threads 00:10:36.522 00:10:36.522 job0: (groupid=0, jobs=1): err= 0: pid=1584548: Wed Jul 24 19:02:41 2024 00:10:36.522 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:10:36.522 slat (nsec): min=14787, max=43448, avg=20430.09, stdev=7976.54 00:10:36.522 clat (usec): min=454, max=41660, avg=38458.75, stdev=9076.25 00:10:36.522 lat (usec): min=471, max=41675, avg=38479.18, stdev=9077.14 00:10:36.522 clat percentiles (usec): 00:10:36.522 | 1.00th=[ 453], 5.00th=[25822], 10.00th=[40633], 
20.00th=[40633], 00:10:36.522 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:36.522 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:36.522 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:36.522 | 99.99th=[41681] 00:10:36.522 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:10:36.522 slat (nsec): min=8642, max=57161, avg=16738.22, stdev=7423.43 00:10:36.522 clat (usec): min=203, max=565, avg=300.54, stdev=62.64 00:10:36.522 lat (usec): min=217, max=577, avg=317.28, stdev=62.86 00:10:36.522 clat percentiles (usec): 00:10:36.522 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 243], 00:10:36.522 | 30.00th=[ 258], 40.00th=[ 273], 50.00th=[ 293], 60.00th=[ 306], 00:10:36.522 | 70.00th=[ 326], 80.00th=[ 347], 90.00th=[ 392], 95.00th=[ 416], 00:10:36.522 | 99.00th=[ 469], 99.50th=[ 545], 99.90th=[ 570], 99.95th=[ 570], 00:10:36.522 | 99.99th=[ 570] 00:10:36.522 bw ( KiB/s): min= 4096, max= 4096, per=50.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:36.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:36.522 lat (usec) : 250=23.78%, 500=71.54%, 750=0.75% 00:10:36.522 lat (msec) : 50=3.93% 00:10:36.522 cpu : usr=0.10%, sys=0.99%, ctx=536, majf=0, minf=1 00:10:36.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.522 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.522 job1: (groupid=0, jobs=1): err= 0: pid=1584549: Wed Jul 24 19:02:41 2024 00:10:36.522 read: IOPS=19, BW=79.6KiB/s (81.5kB/s)(80.0KiB/1005msec) 00:10:36.522 slat (nsec): min=10919, max=50874, avg=25930.45, stdev=11529.33 00:10:36.522 clat (usec): min=40772, max=41247, avg=40987.02, stdev=96.90 00:10:36.522 lat (usec): min=40823, max=41258, avg=41012.95, stdev=89.34 00:10:36.522 clat percentiles (usec): 00:10:36.522 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:36.522 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:36.522 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:36.522 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:36.522 | 99.99th=[41157] 00:10:36.522 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:10:36.522 slat (nsec): min=9805, max=47476, avg=14514.70, stdev=5464.26 00:10:36.522 clat (usec): min=206, max=1099, avg=340.88, stdev=88.62 00:10:36.522 lat (usec): min=216, max=1111, avg=355.40, stdev=89.76 00:10:36.522 clat percentiles (usec): 00:10:36.522 | 1.00th=[ 215], 5.00th=[ 243], 10.00th=[ 262], 20.00th=[ 281], 00:10:36.522 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 338], 00:10:36.522 | 70.00th=[ 363], 80.00th=[ 392], 90.00th=[ 445], 95.00th=[ 486], 00:10:36.522 | 99.00th=[ 570], 99.50th=[ 963], 99.90th=[ 1106], 99.95th=[ 1106], 00:10:36.522 | 99.99th=[ 1106] 00:10:36.522 bw ( KiB/s): min= 4096, max= 4096, per=50.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:36.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:36.522 lat (usec) : 250=6.20%, 500=86.65%, 750=2.82%, 1000=0.19% 00:10:36.522 lat (msec) : 2=0.38%, 50=3.76% 00:10:36.522 cpu : usr=0.20%, sys=1.20%, ctx=533, majf=0, minf=1 00:10:36.522 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.522 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.522 job2: (groupid=0, jobs=1): err= 0: pid=1584550: Wed Jul 24 19:02:41 2024 00:10:36.522 read: IOPS=85, BW=341KiB/s (349kB/s)(344KiB/1008msec) 00:10:36.522 slat (nsec): min=7490, max=34530, avg=12342.37, stdev=6178.98 00:10:36.522 clat (usec): min=315, max=41105, avg=9859.59, stdev=17212.61 00:10:36.522 lat (usec): min=323, max=41121, avg=9871.93, stdev=17215.70 00:10:36.522 clat percentiles (usec): 00:10:36.522 | 1.00th=[ 318], 5.00th=[ 343], 10.00th=[ 388], 20.00th=[ 404], 00:10:36.522 | 30.00th=[ 420], 40.00th=[ 441], 50.00th=[ 457], 60.00th=[ 482], 00:10:36.522 | 70.00th=[ 515], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:10:36.522 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:36.522 | 99.99th=[41157] 00:10:36.522 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:36.522 slat (nsec): min=9495, max=83500, avg=13669.63, stdev=6374.61 00:10:36.522 clat (usec): min=216, max=937, avg=293.22, stdev=49.71 00:10:36.522 lat (usec): min=228, max=949, avg=306.89, stdev=50.52 00:10:36.522 clat percentiles (usec): 00:10:36.522 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 255], 00:10:36.522 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 302], 00:10:36.522 | 70.00th=[ 310], 80.00th=[ 326], 90.00th=[ 343], 95.00th=[ 359], 00:10:36.522 | 99.00th=[ 392], 99.50th=[ 490], 99.90th=[ 938], 99.95th=[ 938], 00:10:36.522 | 99.99th=[ 938] 00:10:36.522 bw ( KiB/s): min= 4096, max= 4096, per=50.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:36.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:36.522 lat (usec) : 250=14.72%, 500=80.27%, 750=1.51%, 1000=0.17% 00:10:36.522 lat (msec) : 50=3.34% 00:10:36.522 cpu : usr=0.30%, sys=0.99%, ctx=599, majf=0, minf=1 00:10:36.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.522 issued rwts: total=86,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.522 job3: (groupid=0, jobs=1): err= 0: pid=1584551: Wed Jul 24 19:02:41 2024 00:10:36.522 read: IOPS=19, BW=79.2KiB/s (81.1kB/s)(80.0KiB/1010msec) 00:10:36.522 slat (nsec): min=8724, max=32297, avg=16902.05, stdev=5351.08 00:10:36.522 clat (usec): min=40530, max=41123, avg=40950.72, stdev=114.72 00:10:36.522 lat (usec): min=40539, max=41138, avg=40967.62, stdev=116.02 00:10:36.522 clat percentiles (usec): 00:10:36.522 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:36.522 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:36.522 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:36.522 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:36.522 | 99.99th=[41157] 00:10:36.522 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:10:36.522 slat (nsec): min=8581, max=71001, avg=17055.66, stdev=9681.03 00:10:36.522 clat (usec): min=207, 
max=1100, avg=350.86, stdev=92.81 00:10:36.522 lat (usec): min=227, max=1114, avg=367.92, stdev=93.66 00:10:36.522 clat percentiles (usec): 00:10:36.522 | 1.00th=[ 223], 5.00th=[ 239], 10.00th=[ 262], 20.00th=[ 285], 00:10:36.522 | 30.00th=[ 297], 40.00th=[ 314], 50.00th=[ 334], 60.00th=[ 355], 00:10:36.522 | 70.00th=[ 379], 80.00th=[ 408], 90.00th=[ 461], 95.00th=[ 515], 00:10:36.522 | 99.00th=[ 570], 99.50th=[ 881], 99.90th=[ 1106], 99.95th=[ 1106], 00:10:36.522 | 99.99th=[ 1106] 00:10:36.522 bw ( KiB/s): min= 4096, max= 4096, per=50.55%, avg=4096.00, stdev= 0.00, samples=1 00:10:36.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:36.522 lat (usec) : 250=6.77%, 500=83.83%, 750=4.89%, 1000=0.56% 00:10:36.522 lat (msec) : 2=0.19%, 50=3.76% 00:10:36.522 cpu : usr=0.89%, sys=0.50%, ctx=533, majf=0, minf=2 00:10:36.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.523 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.523 00:10:36.523 Run status group 0 (all jobs): 00:10:36.523 READ: bw=586KiB/s (600kB/s), 79.2KiB/s-341KiB/s (81.1kB/s-349kB/s), io=592KiB (606kB), run=1005-1011msec 00:10:36.523 WRITE: bw=8103KiB/s (8297kB/s), 2026KiB/s-2038KiB/s (2074kB/s-2087kB/s), io=8192KiB (8389kB), run=1005-1011msec 00:10:36.523 00:10:36.523 Disk stats (read/write): 00:10:36.523 nvme0n1: ios=69/512, merge=0/0, ticks=1019/154, in_queue=1173, util=96.39% 00:10:36.523 nvme0n2: ios=65/512, merge=0/0, ticks=867/169, in_queue=1036, util=96.63% 00:10:36.523 nvme0n3: ios=81/512, merge=0/0, ticks=644/148, in_queue=792, util=88.45% 00:10:36.523 nvme0n4: ios=15/512, merge=0/0, ticks=615/171, in_queue=786, util=89.40% 00:10:36.523 19:02:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:36.523 [global] 00:10:36.523 thread=1 00:10:36.523 invalidate=1 00:10:36.523 rw=randwrite 00:10:36.523 time_based=1 00:10:36.523 runtime=1 00:10:36.523 ioengine=libaio 00:10:36.523 direct=1 00:10:36.523 bs=4096 00:10:36.523 iodepth=1 00:10:36.523 norandommap=0 00:10:36.523 numjobs=1 00:10:36.523 00:10:36.523 verify_dump=1 00:10:36.523 verify_backlog=512 00:10:36.523 verify_state_save=0 00:10:36.523 do_verify=1 00:10:36.523 verify=crc32c-intel 00:10:36.523 [job0] 00:10:36.523 filename=/dev/nvme0n1 00:10:36.523 [job1] 00:10:36.523 filename=/dev/nvme0n2 00:10:36.523 [job2] 00:10:36.523 filename=/dev/nvme0n3 00:10:36.523 [job3] 00:10:36.523 filename=/dev/nvme0n4 00:10:36.523 Could not set queue depth (nvme0n1) 00:10:36.523 Could not set queue depth (nvme0n2) 00:10:36.523 Could not set queue depth (nvme0n3) 00:10:36.523 Could not set queue depth (nvme0n4) 00:10:36.523 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.523 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.523 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.523 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.523 fio-3.35 00:10:36.523 Starting 4 threads 
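The four job files map back to the RPC provisioning earlier in the run: Malloc0 and Malloc1 were added to nqn.2016-06.io.spdk:cnode1 first, then the raid0 and concat0 bdevs, so with sequential namespace allocation they surface on the initiator as nvme0n1 through nvme0n4 in that order. Two commands not shown in this log can confirm that mapping while the target is up, assuming the default RPC socket and that the controller enumerated as nvme0 as it does here (nvmf_get_subsystems is the standard SPDK RPC, nvme list-ns is plain nvme-cli):

    # Target side: each subsystem namespace with its backing bdev name
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
    # Initiator side: namespace IDs visible behind the connected controller
    nvme list-ns /dev/nvme0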
00:10:37.895 00:10:37.895 job0: (groupid=0, jobs=1): err= 0: pid=1584784: Wed Jul 24 19:02:43 2024 00:10:37.895 read: IOPS=1370, BW=5483KiB/s (5614kB/s)(5488KiB/1001msec) 00:10:37.895 slat (nsec): min=7412, max=63699, avg=9991.06, stdev=2925.26 00:10:37.895 clat (usec): min=290, max=601, avg=395.54, stdev=64.47 00:10:37.895 lat (usec): min=299, max=614, avg=405.53, stdev=65.15 00:10:37.895 clat percentiles (usec): 00:10:37.895 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 334], 00:10:37.895 | 30.00th=[ 343], 40.00th=[ 359], 50.00th=[ 383], 60.00th=[ 412], 00:10:37.895 | 70.00th=[ 437], 80.00th=[ 461], 90.00th=[ 482], 95.00th=[ 506], 00:10:37.896 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[ 594], 99.95th=[ 603], 00:10:37.896 | 99.99th=[ 603] 00:10:37.896 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:37.896 slat (nsec): min=10156, max=61956, avg=13201.78, stdev=4411.17 00:10:37.896 clat (usec): min=187, max=785, avg=269.43, stdev=48.50 00:10:37.896 lat (usec): min=204, max=797, avg=282.63, stdev=49.58 00:10:37.896 clat percentiles (usec): 00:10:37.896 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 223], 00:10:37.896 | 30.00th=[ 239], 40.00th=[ 253], 50.00th=[ 269], 60.00th=[ 285], 00:10:37.896 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[ 338], 00:10:37.896 | 99.00th=[ 392], 99.50th=[ 416], 99.90th=[ 742], 99.95th=[ 783], 00:10:37.896 | 99.99th=[ 783] 00:10:37.896 bw ( KiB/s): min= 7728, max= 7728, per=55.95%, avg=7728.00, stdev= 0.00, samples=1 00:10:37.896 iops : min= 1932, max= 1932, avg=1932.00, stdev= 0.00, samples=1 00:10:37.896 lat (usec) : 250=20.05%, 500=76.86%, 750=3.06%, 1000=0.03% 00:10:37.896 cpu : usr=1.60%, sys=5.60%, ctx=2908, majf=0, minf=1 00:10:37.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.896 issued rwts: total=1372,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.896 job1: (groupid=0, jobs=1): err= 0: pid=1584792: Wed Jul 24 19:02:43 2024 00:10:37.896 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:10:37.896 slat (nsec): min=9833, max=17519, avg=15133.00, stdev=2071.42 00:10:37.896 clat (usec): min=378, max=42048, avg=39473.54, stdev=8744.64 00:10:37.896 lat (usec): min=392, max=42062, avg=39488.68, stdev=8744.72 00:10:37.896 clat percentiles (usec): 00:10:37.896 | 1.00th=[ 379], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:37.896 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:37.896 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:37.896 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:37.896 | 99.99th=[42206] 00:10:37.896 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:10:37.896 slat (nsec): min=10310, max=30422, avg=11874.34, stdev=2327.67 00:10:37.896 clat (usec): min=197, max=1390, avg=243.56, stdev=78.37 00:10:37.896 lat (usec): min=210, max=1401, avg=255.44, stdev=78.51 00:10:37.896 clat percentiles (usec): 00:10:37.896 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 221], 00:10:37.896 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 231], 60.00th=[ 237], 00:10:37.896 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 293], 00:10:37.896 | 99.00th=[ 392], 99.50th=[ 
963], 99.90th=[ 1385], 99.95th=[ 1385], 00:10:37.896 | 99.99th=[ 1385] 00:10:37.896 bw ( KiB/s): min= 4096, max= 4096, per=29.66%, avg=4096.00, stdev= 0.00, samples=1 00:10:37.896 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:37.896 lat (usec) : 250=77.90%, 500=17.42%, 1000=0.56% 00:10:37.896 lat (msec) : 2=0.19%, 50=3.93% 00:10:37.896 cpu : usr=0.60%, sys=0.60%, ctx=535, majf=0, minf=2 00:10:37.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.896 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.896 job2: (groupid=0, jobs=1): err= 0: pid=1584817: Wed Jul 24 19:02:43 2024 00:10:37.896 read: IOPS=172, BW=690KiB/s (706kB/s)(716KiB/1038msec) 00:10:37.896 slat (nsec): min=7453, max=24500, avg=11506.44, stdev=3827.07 00:10:37.896 clat (usec): min=350, max=42048, avg=4799.17, stdev=12524.95 00:10:37.896 lat (usec): min=360, max=42064, avg=4810.67, stdev=12527.06 00:10:37.896 clat percentiles (usec): 00:10:37.896 | 1.00th=[ 400], 5.00th=[ 412], 10.00th=[ 424], 20.00th=[ 437], 00:10:37.896 | 30.00th=[ 445], 40.00th=[ 457], 50.00th=[ 469], 60.00th=[ 490], 00:10:37.896 | 70.00th=[ 523], 80.00th=[ 611], 90.00th=[40633], 95.00th=[41157], 00:10:37.896 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:37.896 | 99.99th=[42206] 00:10:37.896 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:10:37.896 slat (usec): min=9, max=1720, avg=15.04, stdev=75.59 00:10:37.896 clat (usec): min=246, max=462, avg=324.43, stdev=30.91 00:10:37.896 lat (usec): min=255, max=2093, avg=339.46, stdev=83.64 00:10:37.896 clat percentiles (usec): 00:10:37.896 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 302], 00:10:37.896 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 326], 00:10:37.896 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 359], 95.00th=[ 379], 00:10:37.896 | 99.00th=[ 449], 99.50th=[ 457], 99.90th=[ 461], 99.95th=[ 461], 00:10:37.896 | 99.99th=[ 461] 00:10:37.896 bw ( KiB/s): min= 4096, max= 4096, per=29.66%, avg=4096.00, stdev= 0.00, samples=1 00:10:37.896 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:37.896 lat (usec) : 250=0.14%, 500=90.30%, 750=5.35%, 1000=1.45% 00:10:37.896 lat (msec) : 50=2.75% 00:10:37.896 cpu : usr=0.39%, sys=1.06%, ctx=696, majf=0, minf=1 00:10:37.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.896 issued rwts: total=179,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.896 job3: (groupid=0, jobs=1): err= 0: pid=1584830: Wed Jul 24 19:02:43 2024 00:10:37.896 read: IOPS=587, BW=2350KiB/s (2406kB/s)(2432KiB/1035msec) 00:10:37.896 slat (nsec): min=6914, max=34257, avg=9369.80, stdev=3763.11 00:10:37.896 clat (usec): min=285, max=41238, avg=1116.11, stdev=5413.65 00:10:37.896 lat (usec): min=293, max=41252, avg=1125.48, stdev=5414.52 00:10:37.896 clat percentiles (usec): 00:10:37.896 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 322], 00:10:37.896 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 
363], 60.00th=[ 388], 00:10:37.896 | 70.00th=[ 420], 80.00th=[ 453], 90.00th=[ 494], 95.00th=[ 529], 00:10:37.896 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:37.896 | 99.99th=[41157] 00:10:37.896 write: IOPS=989, BW=3957KiB/s (4052kB/s)(4096KiB/1035msec); 0 zone resets 00:10:37.896 slat (nsec): min=8902, max=66789, avg=15326.20, stdev=6677.74 00:10:37.896 clat (usec): min=215, max=536, avg=320.80, stdev=38.49 00:10:37.896 lat (usec): min=225, max=560, avg=336.12, stdev=39.74 00:10:37.896 clat percentiles (usec): 00:10:37.896 | 1.00th=[ 233], 5.00th=[ 255], 10.00th=[ 273], 20.00th=[ 293], 00:10:37.896 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 330], 00:10:37.896 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 388], 00:10:37.896 | 99.00th=[ 445], 99.50th=[ 457], 99.90th=[ 482], 99.95th=[ 537], 00:10:37.896 | 99.99th=[ 537] 00:10:37.896 bw ( KiB/s): min= 1256, max= 6936, per=29.66%, avg=4096.00, stdev=4016.37, samples=2 00:10:37.896 iops : min= 314, max= 1734, avg=1024.00, stdev=1004.09, samples=2 00:10:37.896 lat (usec) : 250=2.63%, 500=94.18%, 750=2.51% 00:10:37.896 lat (msec) : 50=0.67% 00:10:37.896 cpu : usr=1.45%, sys=2.42%, ctx=1633, majf=0, minf=1 00:10:37.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.896 issued rwts: total=608,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.896 00:10:37.896 Run status group 0 (all jobs): 00:10:37.896 READ: bw=8405KiB/s (8606kB/s), 87.8KiB/s-5483KiB/s (89.9kB/s-5614kB/s), io=8724KiB (8933kB), run=1001-1038msec 00:10:37.896 WRITE: bw=13.5MiB/s (14.1MB/s), 1973KiB/s-6138KiB/s (2020kB/s-6285kB/s), io=14.0MiB (14.7MB), run=1001-1038msec 00:10:37.896 00:10:37.896 Disk stats (read/write): 00:10:37.896 nvme0n1: ios=1074/1375, merge=0/0, ticks=433/372, in_queue=805, util=85.57% 00:10:37.896 nvme0n2: ios=41/512, merge=0/0, ticks=1658/124, in_queue=1782, util=98.47% 00:10:37.896 nvme0n3: ios=231/512, merge=0/0, ticks=1023/162, in_queue=1185, util=96.18% 00:10:37.896 nvme0n4: ios=628/1024, merge=0/0, ticks=1412/311, in_queue=1723, util=96.14% 00:10:37.896 19:02:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:37.896 [global] 00:10:37.896 thread=1 00:10:37.896 invalidate=1 00:10:37.896 rw=write 00:10:37.896 time_based=1 00:10:37.896 runtime=1 00:10:37.896 ioengine=libaio 00:10:37.896 direct=1 00:10:37.896 bs=4096 00:10:37.896 iodepth=128 00:10:37.896 norandommap=0 00:10:37.896 numjobs=1 00:10:37.896 00:10:37.896 verify_dump=1 00:10:37.896 verify_backlog=512 00:10:37.896 verify_state_save=0 00:10:37.896 do_verify=1 00:10:37.896 verify=crc32c-intel 00:10:37.896 [job0] 00:10:37.896 filename=/dev/nvme0n1 00:10:37.896 [job1] 00:10:37.896 filename=/dev/nvme0n2 00:10:37.896 [job2] 00:10:37.896 filename=/dev/nvme0n3 00:10:37.896 [job3] 00:10:37.896 filename=/dev/nvme0n4 00:10:37.896 Could not set queue depth (nvme0n1) 00:10:37.896 Could not set queue depth (nvme0n2) 00:10:37.896 Could not set queue depth (nvme0n3) 00:10:37.896 Could not set queue depth (nvme0n4) 00:10:38.153 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:38.153 job1: 
(g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:38.153 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:38.153 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:38.153 fio-3.35 00:10:38.153 Starting 4 threads 00:10:39.527 00:10:39.527 job0: (groupid=0, jobs=1): err= 0: pid=1585137: Wed Jul 24 19:02:44 2024 00:10:39.527 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:10:39.527 slat (usec): min=2, max=25706, avg=134.67, stdev=1211.50 00:10:39.527 clat (usec): min=5064, max=59651, avg=19692.07, stdev=8793.30 00:10:39.527 lat (usec): min=5101, max=59658, avg=19826.74, stdev=8883.16 00:10:39.527 clat percentiles (usec): 00:10:39.527 | 1.00th=[ 6652], 5.00th=[10290], 10.00th=[12518], 20.00th=[13435], 00:10:39.527 | 30.00th=[13566], 40.00th=[14353], 50.00th=[15533], 60.00th=[17171], 00:10:39.527 | 70.00th=[23987], 80.00th=[26870], 90.00th=[31589], 95.00th=[39584], 00:10:39.527 | 99.00th=[45876], 99.50th=[45876], 99.90th=[50594], 99.95th=[59507], 00:10:39.527 | 99.99th=[59507] 00:10:39.527 write: IOPS=3650, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1011msec); 0 zone resets 00:10:39.527 slat (usec): min=3, max=19937, avg=111.30, stdev=1003.88 00:10:39.527 clat (usec): min=346, max=69187, avg=15641.38, stdev=8542.59 00:10:39.527 lat (usec): min=809, max=69193, avg=15752.68, stdev=8641.88 00:10:39.527 clat percentiles (usec): 00:10:39.527 | 1.00th=[ 2245], 5.00th=[ 5276], 10.00th=[ 6915], 20.00th=[ 9765], 00:10:39.527 | 30.00th=[11863], 40.00th=[12780], 50.00th=[13173], 60.00th=[14222], 00:10:39.527 | 70.00th=[15270], 80.00th=[22152], 90.00th=[27132], 95.00th=[29230], 00:10:39.527 | 99.00th=[45876], 99.50th=[52167], 99.90th=[63177], 99.95th=[63177], 00:10:39.527 | 99.99th=[68682] 00:10:39.527 bw ( KiB/s): min=12416, max=16384, per=30.97%, avg=14400.00, stdev=2805.80, samples=2 00:10:39.527 iops : min= 3104, max= 4096, avg=3600.00, stdev=701.45, samples=2 00:10:39.527 lat (usec) : 500=0.01%, 1000=0.04% 00:10:39.527 lat (msec) : 2=0.27%, 4=1.48%, 10=10.78%, 20=58.35%, 50=28.73% 00:10:39.527 lat (msec) : 100=0.33% 00:10:39.527 cpu : usr=2.67%, sys=3.66%, ctx=228, majf=0, minf=1 00:10:39.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:39.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.527 issued rwts: total=3584,3691,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:39.527 job1: (groupid=0, jobs=1): err= 0: pid=1585138: Wed Jul 24 19:02:44 2024 00:10:39.527 read: IOPS=2840, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1003msec) 00:10:39.527 slat (usec): min=3, max=29829, avg=168.58, stdev=1257.98 00:10:39.527 clat (usec): min=2070, max=72365, avg=20546.87, stdev=13717.12 00:10:39.527 lat (usec): min=2078, max=72387, avg=20715.46, stdev=13843.34 00:10:39.527 clat percentiles (usec): 00:10:39.527 | 1.00th=[ 4228], 5.00th=[ 9634], 10.00th=[10945], 20.00th=[12256], 00:10:39.527 | 30.00th=[12387], 40.00th=[12518], 50.00th=[13435], 60.00th=[14615], 00:10:39.527 | 70.00th=[16581], 80.00th=[36439], 90.00th=[44303], 95.00th=[49546], 00:10:39.527 | 99.00th=[57934], 99.50th=[57934], 99.90th=[66323], 99.95th=[66847], 00:10:39.527 | 99.99th=[72877] 00:10:39.527 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone 
resets 00:10:39.527 slat (usec): min=5, max=38463, avg=161.06, stdev=1193.69 00:10:39.527 clat (usec): min=7296, max=87203, avg=21809.90, stdev=17372.19 00:10:39.527 lat (usec): min=7307, max=87229, avg=21970.96, stdev=17492.47 00:10:39.527 clat percentiles (usec): 00:10:39.527 | 1.00th=[ 8848], 5.00th=[11338], 10.00th=[11994], 20.00th=[12649], 00:10:39.527 | 30.00th=[13042], 40.00th=[13173], 50.00th=[14091], 60.00th=[15008], 00:10:39.527 | 70.00th=[16057], 80.00th=[30540], 90.00th=[49546], 95.00th=[62653], 00:10:39.527 | 99.00th=[81265], 99.50th=[82314], 99.90th=[86508], 99.95th=[86508], 00:10:39.527 | 99.99th=[87557] 00:10:39.527 bw ( KiB/s): min= 8192, max=16384, per=26.43%, avg=12288.00, stdev=5792.62, samples=2 00:10:39.527 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:10:39.527 lat (msec) : 4=0.22%, 10=4.12%, 20=71.10%, 50=17.48%, 100=7.08% 00:10:39.527 cpu : usr=3.19%, sys=4.79%, ctx=312, majf=0, minf=1 00:10:39.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:39.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.527 issued rwts: total=2849,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:39.527 job2: (groupid=0, jobs=1): err= 0: pid=1585140: Wed Jul 24 19:02:44 2024 00:10:39.527 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:10:39.527 slat (usec): min=3, max=25812, avg=177.51, stdev=1426.91 00:10:39.527 clat (usec): min=4993, max=68631, avg=22480.82, stdev=9238.33 00:10:39.527 lat (usec): min=7757, max=68643, avg=22658.34, stdev=9374.73 00:10:39.527 clat percentiles (usec): 00:10:39.527 | 1.00th=[10028], 5.00th=[12518], 10.00th=[14877], 20.00th=[15664], 00:10:39.527 | 30.00th=[16450], 40.00th=[17695], 50.00th=[20841], 60.00th=[22938], 00:10:39.527 | 70.00th=[24773], 80.00th=[27395], 90.00th=[29492], 95.00th=[38011], 00:10:39.527 | 99.00th=[63701], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682], 00:10:39.527 | 99.99th=[68682] 00:10:39.527 write: IOPS=2147, BW=8591KiB/s (8797kB/s)(8668KiB/1009msec); 0 zone resets 00:10:39.527 slat (usec): min=4, max=25301, avg=286.39, stdev=1691.78 00:10:39.527 clat (msec): min=2, max=210, avg=37.71, stdev=41.67 00:10:39.527 lat (msec): min=2, max=210, avg=38.00, stdev=41.88 00:10:39.527 clat percentiles (msec): 00:10:39.527 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 16], 00:10:39.527 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 23], 00:10:39.527 | 70.00th=[ 34], 80.00th=[ 61], 90.00th=[ 84], 95.00th=[ 140], 00:10:39.527 | 99.00th=[ 207], 99.50th=[ 209], 99.90th=[ 211], 99.95th=[ 211], 00:10:39.527 | 99.99th=[ 211] 00:10:39.527 bw ( KiB/s): min= 4144, max=12288, per=17.67%, avg=8216.00, stdev=5758.68, samples=2 00:10:39.527 iops : min= 1036, max= 3072, avg=2054.00, stdev=1439.67, samples=2 00:10:39.527 lat (msec) : 4=0.33%, 10=1.73%, 20=50.23%, 50=35.09%, 100=9.06% 00:10:39.527 lat (msec) : 250=3.56% 00:10:39.527 cpu : usr=1.98%, sys=3.57%, ctx=227, majf=0, minf=1 00:10:39.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:10:39.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.527 issued rwts: total=2048,2167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.527 latency : target=0, window=0, percentile=100.00%, depth=128 
00:10:39.527 job3: (groupid=0, jobs=1): err= 0: pid=1585141: Wed Jul 24 19:02:44 2024 00:10:39.527 read: IOPS=2509, BW=9.80MiB/s (10.3MB/s)(10.0MiB/1020msec) 00:10:39.527 slat (usec): min=2, max=21596, avg=165.28, stdev=1295.35 00:10:39.527 clat (usec): min=7129, max=45926, avg=20689.77, stdev=6333.16 00:10:39.527 lat (usec): min=7134, max=45929, avg=20855.05, stdev=6427.34 00:10:39.527 clat percentiles (usec): 00:10:39.527 | 1.00th=[ 8225], 5.00th=[ 8717], 10.00th=[13698], 20.00th=[17171], 00:10:39.527 | 30.00th=[17433], 40.00th=[18482], 50.00th=[20055], 60.00th=[21627], 00:10:39.527 | 70.00th=[22676], 80.00th=[25297], 90.00th=[27395], 95.00th=[31065], 00:10:39.527 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45876], 99.95th=[45876], 00:10:39.527 | 99.99th=[45876] 00:10:39.527 write: IOPS=2868, BW=11.2MiB/s (11.7MB/s)(11.4MiB/1020msec); 0 zone resets 00:10:39.527 slat (usec): min=4, max=25151, avg=192.96, stdev=1462.48 00:10:39.527 clat (msec): min=6, max=152, avg=26.13, stdev=23.01 00:10:39.527 lat (msec): min=6, max=152, avg=26.32, stdev=23.16 00:10:39.527 clat percentiles (msec): 00:10:39.527 | 1.00th=[ 8], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 17], 00:10:39.527 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 20], 60.00th=[ 22], 00:10:39.527 | 70.00th=[ 24], 80.00th=[ 26], 90.00th=[ 41], 95.00th=[ 85], 00:10:39.527 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 153], 99.95th=[ 153], 00:10:39.527 | 99.99th=[ 153] 00:10:39.527 bw ( KiB/s): min= 9832, max=12552, per=24.07%, avg=11192.00, stdev=1923.33, samples=2 00:10:39.527 iops : min= 2458, max= 3138, avg=2798.00, stdev=480.83, samples=2 00:10:39.527 lat (msec) : 10=4.58%, 20=47.94%, 50=43.73%, 100=1.88%, 250=1.88% 00:10:39.527 cpu : usr=2.55%, sys=2.65%, ctx=215, majf=0, minf=1 00:10:39.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:39.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.527 issued rwts: total=2560,2926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:39.527 00:10:39.527 Run status group 0 (all jobs): 00:10:39.527 READ: bw=42.3MiB/s (44.3MB/s), 8119KiB/s-13.8MiB/s (8314kB/s-14.5MB/s), io=43.1MiB (45.2MB), run=1003-1020msec 00:10:39.527 WRITE: bw=45.4MiB/s (47.6MB/s), 8591KiB/s-14.3MiB/s (8797kB/s-15.0MB/s), io=46.3MiB (48.6MB), run=1003-1020msec 00:10:39.528 00:10:39.528 Disk stats (read/write): 00:10:39.528 nvme0n1: ios=3114/3079, merge=0/0, ticks=53799/37807, in_queue=91606, util=86.97% 00:10:39.528 nvme0n2: ios=1996/2048, merge=0/0, ticks=25265/25739, in_queue=51004, util=98.37% 00:10:39.528 nvme0n3: ios=1553/1895, merge=0/0, ticks=25912/53692, in_queue=79604, util=98.38% 00:10:39.528 nvme0n4: ios=2472/2560, merge=0/0, ticks=32653/32662, in_queue=65315, util=98.14% 00:10:39.528 19:02:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:39.528 [global] 00:10:39.528 thread=1 00:10:39.528 invalidate=1 00:10:39.528 rw=randwrite 00:10:39.528 time_based=1 00:10:39.528 runtime=1 00:10:39.528 ioengine=libaio 00:10:39.528 direct=1 00:10:39.528 bs=4096 00:10:39.528 iodepth=128 00:10:39.528 norandommap=0 00:10:39.528 numjobs=1 00:10:39.528 00:10:39.528 verify_dump=1 00:10:39.528 verify_backlog=512 00:10:39.528 verify_state_save=0 00:10:39.528 do_verify=1 00:10:39.528 
verify=crc32c-intel 00:10:39.528 [job0] 00:10:39.528 filename=/dev/nvme0n1 00:10:39.528 [job1] 00:10:39.528 filename=/dev/nvme0n2 00:10:39.528 [job2] 00:10:39.528 filename=/dev/nvme0n3 00:10:39.528 [job3] 00:10:39.528 filename=/dev/nvme0n4 00:10:39.528 Could not set queue depth (nvme0n1) 00:10:39.528 Could not set queue depth (nvme0n2) 00:10:39.528 Could not set queue depth (nvme0n3) 00:10:39.528 Could not set queue depth (nvme0n4) 00:10:39.528 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.528 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.528 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.528 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.528 fio-3.35 00:10:39.528 Starting 4 threads 00:10:40.902 00:10:40.902 job0: (groupid=0, jobs=1): err= 0: pid=1585373: Wed Jul 24 19:02:46 2024 00:10:40.902 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:10:40.902 slat (usec): min=4, max=22457, avg=226.77, stdev=1348.69 00:10:40.902 clat (usec): min=19698, max=70675, avg=29956.28, stdev=7595.64 00:10:40.902 lat (usec): min=19707, max=77231, avg=30183.05, stdev=7732.88 00:10:40.902 clat percentiles (usec): 00:10:40.902 | 1.00th=[19792], 5.00th=[20317], 10.00th=[22676], 20.00th=[24511], 00:10:40.902 | 30.00th=[26870], 40.00th=[27395], 50.00th=[28967], 60.00th=[29492], 00:10:40.902 | 70.00th=[30540], 80.00th=[32113], 90.00th=[38011], 95.00th=[46400], 00:10:40.902 | 99.00th=[58459], 99.50th=[60556], 99.90th=[67634], 99.95th=[67634], 00:10:40.902 | 99.99th=[70779] 00:10:40.902 write: IOPS=2137, BW=8550KiB/s (8756kB/s)(8576KiB/1003msec); 0 zone resets 00:10:40.902 slat (usec): min=3, max=18867, avg=242.85, stdev=1389.49 00:10:40.902 clat (usec): min=875, max=84578, avg=30579.85, stdev=11473.86 00:10:40.902 lat (usec): min=6646, max=87006, avg=30822.69, stdev=11559.51 00:10:40.902 clat percentiles (usec): 00:10:40.902 | 1.00th=[ 6915], 5.00th=[19530], 10.00th=[21103], 20.00th=[24773], 00:10:40.902 | 30.00th=[26870], 40.00th=[27657], 50.00th=[28705], 60.00th=[29754], 00:10:40.902 | 70.00th=[31327], 80.00th=[32637], 90.00th=[37487], 95.00th=[58459], 00:10:40.902 | 99.00th=[76022], 99.50th=[77071], 99.90th=[84411], 99.95th=[84411], 00:10:40.902 | 99.99th=[84411] 00:10:40.903 bw ( KiB/s): min= 8192, max= 8192, per=17.87%, avg=8192.00, stdev= 0.00, samples=2 00:10:40.903 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:40.903 lat (usec) : 1000=0.02% 00:10:40.903 lat (msec) : 10=1.10%, 20=3.17%, 50=91.05%, 100=4.65% 00:10:40.903 cpu : usr=1.90%, sys=3.49%, ctx=183, majf=0, minf=1 00:10:40.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:10:40.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.903 issued rwts: total=2048,2144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.903 job1: (groupid=0, jobs=1): err= 0: pid=1585374: Wed Jul 24 19:02:46 2024 00:10:40.903 read: IOPS=1970, BW=7880KiB/s (8070kB/s)(7912KiB/1004msec) 00:10:40.903 slat (usec): min=4, max=24520, avg=242.09, stdev=1581.86 00:10:40.903 clat (usec): min=1586, max=69243, avg=30156.78, stdev=9558.04 00:10:40.903 lat 
(usec): min=8314, max=73169, avg=30398.88, stdev=9683.13 00:10:40.903 clat percentiles (usec): 00:10:40.903 | 1.00th=[14746], 5.00th=[17433], 10.00th=[18220], 20.00th=[24511], 00:10:40.903 | 30.00th=[26084], 40.00th=[27395], 50.00th=[28443], 60.00th=[28967], 00:10:40.903 | 70.00th=[30016], 80.00th=[33817], 90.00th=[44827], 95.00th=[50594], 00:10:40.903 | 99.00th=[62129], 99.50th=[62129], 99.90th=[65799], 99.95th=[69731], 00:10:40.903 | 99.99th=[69731] 00:10:40.903 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:10:40.903 slat (usec): min=5, max=42897, avg=246.41, stdev=1693.95 00:10:40.903 clat (usec): min=13059, max=75724, avg=32617.75, stdev=12457.63 00:10:40.903 lat (usec): min=13072, max=78465, avg=32864.16, stdev=12557.89 00:10:40.903 clat percentiles (usec): 00:10:40.903 | 1.00th=[13042], 5.00th=[14091], 10.00th=[21103], 20.00th=[26084], 00:10:40.903 | 30.00th=[27132], 40.00th=[27657], 50.00th=[30016], 60.00th=[32113], 00:10:40.903 | 70.00th=[33817], 80.00th=[35914], 90.00th=[54264], 95.00th=[62653], 00:10:40.903 | 99.00th=[70779], 99.50th=[72877], 99.90th=[76022], 99.95th=[76022], 00:10:40.903 | 99.99th=[76022] 00:10:40.903 bw ( KiB/s): min= 8192, max= 8192, per=17.87%, avg=8192.00, stdev= 0.00, samples=2 00:10:40.903 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:40.903 lat (msec) : 2=0.02%, 10=0.25%, 20=9.19%, 50=80.70%, 100=9.84% 00:10:40.903 cpu : usr=2.29%, sys=3.39%, ctx=162, majf=0, minf=1 00:10:40.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:10:40.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.903 issued rwts: total=1978,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.903 job2: (groupid=0, jobs=1): err= 0: pid=1585375: Wed Jul 24 19:02:46 2024 00:10:40.903 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:10:40.903 slat (usec): min=3, max=26910, avg=163.28, stdev=1113.74 00:10:40.903 clat (usec): min=7317, max=90659, avg=22040.67, stdev=15002.71 00:10:40.903 lat (usec): min=7322, max=90665, avg=22203.95, stdev=15088.36 00:10:40.903 clat percentiles (usec): 00:10:40.903 | 1.00th=[10028], 5.00th=[14222], 10.00th=[15008], 20.00th=[15926], 00:10:40.903 | 30.00th=[16319], 40.00th=[16909], 50.00th=[17171], 60.00th=[17957], 00:10:40.903 | 70.00th=[19268], 80.00th=[20841], 90.00th=[28967], 95.00th=[60556], 00:10:40.903 | 99.00th=[85459], 99.50th=[90702], 99.90th=[90702], 99.95th=[90702], 00:10:40.903 | 99.99th=[90702] 00:10:40.903 write: IOPS=3348, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1002msec); 0 zone resets 00:10:40.903 slat (usec): min=4, max=13498, avg=141.09, stdev=912.78 00:10:40.903 clat (usec): min=577, max=53241, avg=17451.30, stdev=6232.93 00:10:40.903 lat (usec): min=4091, max=60779, avg=17592.39, stdev=6314.03 00:10:40.903 clat percentiles (usec): 00:10:40.903 | 1.00th=[ 4686], 5.00th=[10814], 10.00th=[13304], 20.00th=[14746], 00:10:40.903 | 30.00th=[15533], 40.00th=[16057], 50.00th=[16319], 60.00th=[16712], 00:10:40.903 | 70.00th=[17171], 80.00th=[18744], 90.00th=[23987], 95.00th=[27132], 00:10:40.903 | 99.00th=[47973], 99.50th=[48497], 99.90th=[53216], 99.95th=[53216], 00:10:40.903 | 99.99th=[53216] 00:10:40.903 bw ( KiB/s): min=15592, max=15592, per=34.02%, avg=15592.00, stdev= 0.00, samples=1 00:10:40.903 iops : min= 3898, max= 3898, avg=3898.00, stdev= 0.00, samples=1 
00:10:40.903 lat (usec) : 750=0.02% 00:10:40.903 lat (msec) : 10=2.55%, 20=75.79%, 50=18.25%, 100=3.39% 00:10:40.903 cpu : usr=2.20%, sys=4.90%, ctx=371, majf=0, minf=1 00:10:40.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:40.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.903 issued rwts: total=3072,3355,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.903 job3: (groupid=0, jobs=1): err= 0: pid=1585376: Wed Jul 24 19:02:46 2024 00:10:40.903 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:10:40.903 slat (usec): min=3, max=7324, avg=127.80, stdev=636.80 00:10:40.903 clat (usec): min=11601, max=21561, avg=16656.86, stdev=1746.02 00:10:40.903 lat (usec): min=11997, max=23398, avg=16784.67, stdev=1662.42 00:10:40.903 clat percentiles (usec): 00:10:40.903 | 1.00th=[12518], 5.00th=[13304], 10.00th=[14877], 20.00th=[15139], 00:10:40.903 | 30.00th=[15664], 40.00th=[16319], 50.00th=[16712], 60.00th=[16909], 00:10:40.903 | 70.00th=[17433], 80.00th=[18220], 90.00th=[19006], 95.00th=[19530], 00:10:40.903 | 99.00th=[20317], 99.50th=[20317], 99.90th=[21627], 99.95th=[21627], 00:10:40.903 | 99.99th=[21627] 00:10:40.903 write: IOPS=3941, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1004msec); 0 zone resets 00:10:40.903 slat (usec): min=5, max=12841, avg=129.89, stdev=751.56 00:10:40.903 clat (usec): min=451, max=32650, avg=17075.02, stdev=4874.77 00:10:40.903 lat (usec): min=1385, max=32658, avg=17204.91, stdev=4889.91 00:10:40.903 clat percentiles (usec): 00:10:40.903 | 1.00th=[ 4948], 5.00th=[12256], 10.00th=[12518], 20.00th=[13698], 00:10:40.903 | 30.00th=[14615], 40.00th=[15401], 50.00th=[15795], 60.00th=[16581], 00:10:40.903 | 70.00th=[17957], 80.00th=[19006], 90.00th=[26346], 95.00th=[28967], 00:10:40.903 | 99.00th=[30016], 99.50th=[32637], 99.90th=[32637], 99.95th=[32637], 00:10:40.903 | 99.99th=[32637] 00:10:40.903 bw ( KiB/s): min=14248, max=16384, per=33.42%, avg=15316.00, stdev=1510.38, samples=2 00:10:40.903 iops : min= 3562, max= 4096, avg=3829.00, stdev=377.60, samples=2 00:10:40.903 lat (usec) : 500=0.01% 00:10:40.903 lat (msec) : 2=0.03%, 4=0.09%, 10=0.85%, 20=89.60%, 50=9.42% 00:10:40.903 cpu : usr=3.89%, sys=6.08%, ctx=372, majf=0, minf=1 00:10:40.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:40.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.903 issued rwts: total=3584,3957,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.903 00:10:40.903 Run status group 0 (all jobs): 00:10:40.903 READ: bw=41.6MiB/s (43.6MB/s), 7880KiB/s-13.9MiB/s (8070kB/s-14.6MB/s), io=41.7MiB (43.8MB), run=1002-1004msec 00:10:40.903 WRITE: bw=44.8MiB/s (46.9MB/s), 8159KiB/s-15.4MiB/s (8355kB/s-16.1MB/s), io=44.9MiB (47.1MB), run=1002-1004msec 00:10:40.903 00:10:40.903 Disk stats (read/write): 00:10:40.903 nvme0n1: ios=1572/1899, merge=0/0, ticks=22363/27514, in_queue=49877, util=89.08% 00:10:40.903 nvme0n2: ios=1576/1723, merge=0/0, ticks=24838/24723, in_queue=49561, util=98.37% 00:10:40.903 nvme0n3: ios=2592/2673, merge=0/0, ticks=23965/19029, in_queue=42994, util=96.71% 00:10:40.903 nvme0n4: ios=3094/3121, merge=0/0, ticks=13432/17597, in_queue=31029, util=97.75% 
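The per-device Disk stats blocks that close each run come from the kernel block-layer I/O counters that fio samples around the workload, the same numbers exposed in /proc/diskstats. Roughly the same snapshot can be taken by hand; the awk below is not part of this test, field positions follow the kernel iostats layout, and the device names are the ones appearing in this log:

    # Counters behind fio's Disk stats section (reads/writes completed,
    # time spent doing I/O, weighted time in queue, all from the kernel):
    awk '$3 ~ /^nvme0n[1-4]$/ {
        printf "%s reads=%s writes=%s io_ticks_ms=%s in_queue_ms=%s\n",
               $3, $4, $8, $13, $14
    }' /proc/diskstats

util in fio's output is that io-busy time expressed as a fraction of the elapsed wall-clock runtime.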
00:10:40.903 19:02:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:40.903 19:02:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1585508 00:10:40.903 19:02:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:40.903 19:02:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:40.903 [global] 00:10:40.903 thread=1 00:10:40.903 invalidate=1 00:10:40.903 rw=read 00:10:40.903 time_based=1 00:10:40.903 runtime=10 00:10:40.903 ioengine=libaio 00:10:40.903 direct=1 00:10:40.903 bs=4096 00:10:40.903 iodepth=1 00:10:40.903 norandommap=1 00:10:40.903 numjobs=1 00:10:40.903 00:10:40.903 [job0] 00:10:40.903 filename=/dev/nvme0n1 00:10:40.903 [job1] 00:10:40.903 filename=/dev/nvme0n2 00:10:40.903 [job2] 00:10:40.903 filename=/dev/nvme0n3 00:10:40.903 [job3] 00:10:40.903 filename=/dev/nvme0n4 00:10:40.903 Could not set queue depth (nvme0n1) 00:10:40.903 Could not set queue depth (nvme0n2) 00:10:40.903 Could not set queue depth (nvme0n3) 00:10:40.903 Could not set queue depth (nvme0n4) 00:10:41.160 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.160 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.161 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.161 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.161 fio-3.35 00:10:41.161 Starting 4 threads 00:10:44.460 19:02:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:44.460 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=26398720, buflen=4096 00:10:44.460 fio: pid=1585621, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:44.460 19:02:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:44.460 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=28852224, buflen=4096 00:10:44.460 fio: pid=1585614, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:44.460 19:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:44.460 19:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:44.736 19:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:44.736 19:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:44.993 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=3272704, buflen=4096 00:10:44.993 fio: pid=1585600, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:45.251 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=32153600, buflen=4096 00:10:45.251 fio: pid=1585601, err=5/file:io_u.c:1889, func=io_u error, 
error=Input/output error 00:10:45.251 19:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.251 19:02:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:45.251 00:10:45.251 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1585600: Wed Jul 24 19:02:50 2024 00:10:45.251 read: IOPS=221, BW=886KiB/s (907kB/s)(3196KiB/3608msec) 00:10:45.251 slat (usec): min=7, max=12824, avg=42.06, stdev=592.99 00:10:45.251 clat (usec): min=296, max=47298, avg=4441.24, stdev=12083.40 00:10:45.251 lat (usec): min=307, max=53996, avg=4483.28, stdev=12189.51 00:10:45.251 clat percentiles (usec): 00:10:45.251 | 1.00th=[ 318], 5.00th=[ 367], 10.00th=[ 379], 20.00th=[ 396], 00:10:45.251 | 30.00th=[ 404], 40.00th=[ 412], 50.00th=[ 424], 60.00th=[ 441], 00:10:45.251 | 70.00th=[ 502], 80.00th=[ 537], 90.00th=[ 7373], 95.00th=[41157], 00:10:45.251 | 99.00th=[41157], 99.50th=[41681], 99.90th=[47449], 99.95th=[47449], 00:10:45.251 | 99.99th=[47449] 00:10:45.251 bw ( KiB/s): min= 91, max= 5776, per=4.04%, avg=910.14, stdev=2145.65, samples=7 00:10:45.251 iops : min= 22, max= 1444, avg=227.43, stdev=536.46, samples=7 00:10:45.251 lat (usec) : 500=69.88%, 750=18.88%, 1000=0.88% 00:10:45.251 lat (msec) : 2=0.12%, 4=0.12%, 10=0.12%, 20=0.12%, 50=9.75% 00:10:45.251 cpu : usr=0.11%, sys=0.36%, ctx=804, majf=0, minf=1 00:10:45.251 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.251 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.251 issued rwts: total=800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.251 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.251 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1585601: Wed Jul 24 19:02:50 2024 00:10:45.251 read: IOPS=1999, BW=7996KiB/s (8188kB/s)(30.7MiB/3927msec) 00:10:45.251 slat (usec): min=7, max=23914, avg=15.52, stdev=303.70 00:10:45.251 clat (usec): min=263, max=41319, avg=482.09, stdev=2372.91 00:10:45.251 lat (usec): min=271, max=50996, avg=496.65, stdev=2412.41 00:10:45.251 clat percentiles (usec): 00:10:45.251 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 302], 00:10:45.251 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 330], 00:10:45.251 | 70.00th=[ 347], 80.00th=[ 396], 90.00th=[ 424], 95.00th=[ 445], 00:10:45.251 | 99.00th=[ 570], 99.50th=[ 709], 99.90th=[41157], 99.95th=[41157], 00:10:45.251 | 99.99th=[41157] 00:10:45.251 bw ( KiB/s): min= 4049, max=12464, per=39.39%, avg=8882.43, stdev=3166.49, samples=7 00:10:45.251 iops : min= 1012, max= 3116, avg=2220.57, stdev=791.69, samples=7 00:10:45.251 lat (usec) : 500=98.15%, 750=1.34%, 1000=0.10% 00:10:45.251 lat (msec) : 2=0.05%, 50=0.34% 00:10:45.251 cpu : usr=1.38%, sys=3.23%, ctx=7858, majf=0, minf=1 00:10:45.251 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.251 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.251 issued rwts: total=7851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.251 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.251 
job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1585614: Wed Jul 24 19:02:50 2024 00:10:45.251 read: IOPS=2148, BW=8593KiB/s (8799kB/s)(27.5MiB/3279msec) 00:10:45.251 slat (nsec): min=6581, max=89949, avg=10651.00, stdev=3678.68 00:10:45.251 clat (usec): min=291, max=42070, avg=449.06, stdev=1748.62 00:10:45.251 lat (usec): min=298, max=42088, avg=459.71, stdev=1748.96 00:10:45.251 clat percentiles (usec): 00:10:45.251 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 330], 00:10:45.251 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 371], 00:10:45.251 | 70.00th=[ 396], 80.00th=[ 416], 90.00th=[ 449], 95.00th=[ 494], 00:10:45.251 | 99.00th=[ 553], 99.50th=[ 594], 99.90th=[41157], 99.95th=[41157], 00:10:45.251 | 99.99th=[42206] 00:10:45.251 bw ( KiB/s): min= 4168, max=11168, per=40.70%, avg=9178.67, stdev=2537.06, samples=6 00:10:45.251 iops : min= 1042, max= 2792, avg=2294.67, stdev=634.26, samples=6 00:10:45.251 lat (usec) : 500=95.74%, 750=3.99%, 1000=0.03% 00:10:45.251 lat (msec) : 2=0.03%, 4=0.01%, 50=0.18% 00:10:45.252 cpu : usr=1.53%, sys=2.96%, ctx=7050, majf=0, minf=1 00:10:45.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.252 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.252 issued rwts: total=7045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.252 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1585621: Wed Jul 24 19:02:50 2024 00:10:45.252 read: IOPS=2188, BW=8751KiB/s (8961kB/s)(25.2MiB/2946msec) 00:10:45.252 slat (nsec): min=6582, max=55107, avg=12351.99, stdev=4078.96 00:10:45.252 clat (usec): min=293, max=41485, avg=438.17, stdev=1013.83 00:10:45.252 lat (usec): min=301, max=41495, avg=450.52, stdev=1014.01 00:10:45.252 clat percentiles (usec): 00:10:45.252 | 1.00th=[ 310], 5.00th=[ 326], 10.00th=[ 343], 20.00th=[ 371], 00:10:45.252 | 30.00th=[ 388], 40.00th=[ 400], 50.00th=[ 412], 60.00th=[ 424], 00:10:45.252 | 70.00th=[ 433], 80.00th=[ 449], 90.00th=[ 474], 95.00th=[ 498], 00:10:45.252 | 99.00th=[ 570], 99.50th=[ 627], 99.90th=[ 1123], 99.95th=[40633], 00:10:45.252 | 99.99th=[41681] 00:10:45.252 bw ( KiB/s): min= 6912, max= 9432, per=37.97%, avg=8561.60, stdev=1106.61, samples=5 00:10:45.252 iops : min= 1728, max= 2358, avg=2140.40, stdev=276.65, samples=5 00:10:45.252 lat (usec) : 500=95.02%, 750=4.76%, 1000=0.05% 00:10:45.252 lat (msec) : 2=0.08%, 4=0.02%, 50=0.06% 00:10:45.252 cpu : usr=1.32%, sys=3.57%, ctx=6447, majf=0, minf=1 00:10:45.252 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.252 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.252 issued rwts: total=6446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.252 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.252 00:10:45.252 Run status group 0 (all jobs): 00:10:45.252 READ: bw=22.0MiB/s (23.1MB/s), 886KiB/s-8751KiB/s (907kB/s-8961kB/s), io=86.5MiB (90.7MB), run=2946-3927msec 00:10:45.252 00:10:45.252 Disk stats (read/write): 00:10:45.252 nvme0n1: ios=798/0, merge=0/0, ticks=3506/0, in_queue=3506, util=94.77% 00:10:45.252 nvme0n2: ios=7888/0, merge=0/0, ticks=3862/0, in_queue=3862, util=98.67% 
00:10:45.252 nvme0n3: ios=6923/0, merge=0/0, ticks=3252/0, in_queue=3252, util=99.34% 00:10:45.252 nvme0n4: ios=6147/0, merge=0/0, ticks=2658/0, in_queue=2658, util=96.71% 00:10:45.510 19:02:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.510 19:02:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:46.076 19:02:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.076 19:02:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:46.642 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:46.642 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:47.211 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:47.211 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:47.468 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:47.468 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1585508 00:10:47.468 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:47.468 19:02:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.468 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.468 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:47.468 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:47.468 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.468 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:47.468 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.468 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:47.468 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:47.468 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:47.468 nvmf hotplug test: fio failed as expected 00:10:47.468 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:48.032 rmmod nvme_tcp 00:10:48.032 rmmod nvme_fabrics 00:10:48.032 rmmod nvme_keyring 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1583334 ']' 00:10:48.032 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1583334 00:10:48.033 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1583334 ']' 00:10:48.033 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1583334 00:10:48.033 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:48.033 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:48.033 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1583334 00:10:48.033 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:48.033 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:48.033 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1583334' 00:10:48.033 killing process with pid 1583334 00:10:48.033 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1583334 00:10:48.033 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1583334 00:10:48.599 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:48.599 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:48.599 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:48.599 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:48.599 19:02:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:48.599 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.599 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.599 19:02:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:50.500 00:10:50.500 real 0m28.127s 00:10:50.500 user 1m41.329s 00:10:50.500 sys 0m7.793s 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.500 ************************************ 00:10:50.500 END TEST nvmf_fio_target 00:10:50.500 ************************************ 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:50.500 ************************************ 00:10:50.500 START TEST nvmf_bdevio 00:10:50.500 ************************************ 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:50.500 * Looking for test storage... 
00:10:50.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.500 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.501 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:50.501 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:50.501 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:50.501 19:02:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:53.784 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:53.784 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:53.784 Found net devices under 0000:84:00.0: cvl_0_0 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:53.784 Found net devices under 0000:84:00.1: cvl_0_1 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.784 19:02:58 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:53.784 19:02:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.784 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.784 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:53.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:10:53.785 00:10:53.785 --- 10.0.0.2 ping statistics --- 00:10:53.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.785 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:53.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:10:53.785 00:10:53.785 --- 10.0.0.1 ping statistics --- 00:10:53.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.785 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1588514 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1588514 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1588514 ']' 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.785 19:02:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:53.785 [2024-07-24 19:02:59.257881] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:10:53.785 [2024-07-24 19:02:59.257986] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.785 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.785 [2024-07-24 19:02:59.373988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.043 [2024-07-24 19:02:59.573721] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.043 [2024-07-24 19:02:59.573811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.043 [2024-07-24 19:02:59.573833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.043 [2024-07-24 19:02:59.573852] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.043 [2024-07-24 19:02:59.573867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.043 [2024-07-24 19:02:59.573987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:54.043 [2024-07-24 19:02:59.574050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:54.043 [2024-07-24 19:02:59.574135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:54.043 [2024-07-24 19:02:59.574138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.976 [2024-07-24 19:03:00.389591] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.976 Malloc0 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.976 [2024-07-24 19:03:00.445642] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:54.976 { 00:10:54.976 "params": { 00:10:54.976 "name": "Nvme$subsystem", 00:10:54.976 "trtype": "$TEST_TRANSPORT", 00:10:54.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:54.976 "adrfam": "ipv4", 00:10:54.976 "trsvcid": "$NVMF_PORT", 00:10:54.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:54.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:54.976 "hdgst": ${hdgst:-false}, 00:10:54.976 "ddgst": ${ddgst:-false} 00:10:54.976 }, 00:10:54.976 "method": "bdev_nvme_attach_controller" 00:10:54.976 } 00:10:54.976 EOF 00:10:54.976 )") 00:10:54.976 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:54.977 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:10:54.977 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:54.977 19:03:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:54.977 "params": { 00:10:54.977 "name": "Nvme1", 00:10:54.977 "trtype": "tcp", 00:10:54.977 "traddr": "10.0.0.2", 00:10:54.977 "adrfam": "ipv4", 00:10:54.977 "trsvcid": "4420", 00:10:54.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.977 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:54.977 "hdgst": false, 00:10:54.977 "ddgst": false 00:10:54.977 }, 00:10:54.977 "method": "bdev_nvme_attach_controller" 00:10:54.977 }' 00:10:54.977 [2024-07-24 19:03:00.500134] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:10:54.977 [2024-07-24 19:03:00.500225] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1588703 ] 00:10:54.977 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.977 [2024-07-24 19:03:00.576373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:55.234 [2024-07-24 19:03:00.717006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.234 [2024-07-24 19:03:00.717078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.234 [2024-07-24 19:03:00.717083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.234 I/O targets: 00:10:55.234 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:55.234 00:10:55.234 00:10:55.234 CUnit - A unit testing framework for C - Version 2.1-3 00:10:55.234 http://cunit.sourceforge.net/ 00:10:55.234 00:10:55.234 00:10:55.234 Suite: bdevio tests on: Nvme1n1 00:10:55.492 Test: blockdev write read block ...passed 00:10:55.492 Test: blockdev write zeroes read block ...passed 00:10:55.492 Test: blockdev write zeroes read no split ...passed 00:10:55.492 Test: blockdev write zeroes read split ...passed 00:10:55.492 Test: blockdev write zeroes read split partial ...passed 00:10:55.492 Test: blockdev reset ...[2024-07-24 19:03:01.082086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:55.492 [2024-07-24 19:03:01.082209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243abd0 (9): Bad file descriptor 00:10:55.492 [2024-07-24 19:03:01.137819] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:55.492 passed 00:10:55.492 Test: blockdev write read 8 blocks ...passed 00:10:55.492 Test: blockdev write read size > 128k ...passed 00:10:55.492 Test: blockdev write read invalid size ...passed 00:10:55.750 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:55.750 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:55.750 Test: blockdev write read max offset ...passed 00:10:55.750 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:55.750 Test: blockdev writev readv 8 blocks ...passed 00:10:55.750 Test: blockdev writev readv 30 x 1block ...passed 00:10:55.750 Test: blockdev writev readv block ...passed 00:10:55.750 Test: blockdev writev readv size > 128k ...passed 00:10:55.750 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:55.750 Test: blockdev comparev and writev ...[2024-07-24 19:03:01.394109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.750 [2024-07-24 19:03:01.394156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:55.750 [2024-07-24 19:03:01.394189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.750 [2024-07-24 19:03:01.394212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:55.750 [2024-07-24 19:03:01.394831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.750 [2024-07-24 19:03:01.394871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:55.750 [2024-07-24 19:03:01.394904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.750 [2024-07-24 19:03:01.394928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:55.750 [2024-07-24 19:03:01.395503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.750 [2024-07-24 19:03:01.395536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:55.750 [2024-07-24 19:03:01.395564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.750 [2024-07-24 19:03:01.395585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:55.750 [2024-07-24 19:03:01.396168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.750 [2024-07-24 19:03:01.396199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:55.750 [2024-07-24 19:03:01.396227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:55.750 [2024-07-24 19:03:01.396248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:55.750 passed 00:10:56.008 Test: blockdev nvme passthru rw ...passed 00:10:56.008 Test: blockdev nvme passthru vendor specific ...[2024-07-24 19:03:01.479814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.008 [2024-07-24 19:03:01.479852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:56.008 [2024-07-24 19:03:01.480100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.008 [2024-07-24 19:03:01.480131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:56.008 [2024-07-24 19:03:01.480350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.008 [2024-07-24 19:03:01.480387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:56.008 [2024-07-24 19:03:01.480610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:56.008 [2024-07-24 19:03:01.480641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:56.008 passed 00:10:56.008 Test: blockdev nvme admin passthru ...passed 00:10:56.008 Test: blockdev copy ...passed 00:10:56.008 00:10:56.008 Run Summary: Type Total Ran Passed Failed Inactive 00:10:56.008 suites 1 1 n/a 0 0 00:10:56.008 tests 23 23 23 0 0 00:10:56.008 asserts 152 152 152 0 n/a 00:10:56.008 00:10:56.008 Elapsed time = 1.229 seconds 00:10:56.265 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:56.265 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.265 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:56.265 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.265 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:56.265 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:56.266 rmmod nvme_tcp 00:10:56.266 rmmod nvme_fabrics 00:10:56.266 rmmod nvme_keyring 00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
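[editor's note] The xtrace lines just above (set +e, the {1..20} loop around modprobe -v -r nvme-tcp, then set -e and return 0) show the shape of the module unload in nvmftestfini. A minimal reconstruction of that pattern — a sketch inferred from the trace, not the actual nvmf/common.sh source, so the retry and exit conditions may differ:

    # Best-effort unload: removing nvme-tcp cascades to nvme_fabrics and
    # nvme_keyring (see the rmmod lines above); failures are tolerated and
    # the removal is retried up to 20 times before errexit is restored.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
    done
    modprobe -v -r nvme-fabrics
    set -e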
00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1588514 ']'
00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1588514
00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1588514 ']'
00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1588514
00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname
00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1588514
00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3
00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']'
00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1588514'
00:10:56.266 killing process with pid 1588514
00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1588514
00:10:56.266 19:03:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1588514
00:10:56.834 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:56.834 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:56.834 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:56.834 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:56.834 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:56.834 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:56.834 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:56.834 19:03:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:58.738 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:58.738
00:10:58.738 real 0m8.292s
00:10:58.738 user 0m14.165s
00:10:58.738 sys 0m2.986s
00:10:58.738 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:58.738 19:03:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:58.738 ************************************
00:10:58.738 END TEST nvmf_bdevio
00:10:58.738 ************************************
00:10:58.738 19:03:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:10:58.738
00:10:58.738 real 4m33.586s
00:10:58.738 user 11m40.676s
00:10:58.738 sys 1m22.443s
00:10:58.738 19:03:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:58.738 19:03:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:58.997 ************************************
00:10:58.997 END TEST nvmf_target_core
00:10:58.997 ************************************
00:10:58.997 19:03:04 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:58.997 19:03:04 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:58.997 19:03:04 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.997 19:03:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:58.997 ************************************ 00:10:58.997 START TEST nvmf_target_extra 00:10:58.997 ************************************ 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:58.997 * Looking for test storage... 00:10:58.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:58.997 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:58.998 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:58.998 19:03:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
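run_test is the harness wrapper that emits the START TEST/END TEST banners and the per-suite real/user/sys timing seen above. To rerun just the stage that starts here outside Jenkins, a sketch assuming the same checkout path as this job (run as root, since the script loads kernel modules and manipulates network namespaces):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  test/nvmf/target/nvmf_example.sh --transport=tcp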
00:10:58.998 19:03:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:58.998 19:03:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.998 19:03:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:58.998 ************************************ 00:10:58.998 START TEST nvmf_example 00:10:58.998 ************************************ 00:10:58.998 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:59.257 * Looking for test storage... 00:10:59.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:59.257 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:59.257 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:59.257 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.257 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.257 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.257 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.257 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.257 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.257 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.258 19:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:59.258 19:03:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.817 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.817 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:01.817 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:01.817 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:01.817 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:01.817 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:11:01.817 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:01.817 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:01.818 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:01.818 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:01.818 Found net devices under 0000:84:00.0: cvl_0_0 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.818 19:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:01.818 Found net devices under 0000:84:00.1: cvl_0_1 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.818 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:02.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:02.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms
00:11:02.077
00:11:02.077 --- 10.0.0.2 ping statistics ---
00:11:02.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:02.077 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:02.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:02.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms
00:11:02.077
00:11:02.077 --- 10.0.0.1 ping statistics ---
00:11:02.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:02.077 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1591053
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1591053
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1591053 ']'
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100
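The two pings are the harness's gate before anything NVMe-specific runs: the target address 10.0.0.2 sits on cvl_0_0 inside the cvl_0_0_ns_spdk namespace created earlier in this trace, while the initiator address 10.0.0.1 sits on cvl_0_1 in the root namespace. A condensed sketch of that wiring and of the example-app launch, using the interface names, addresses, and flags from this job (run as root from the SPDK checkout):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side NIC moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side NIC stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> root namespace
  ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &

00:11:02.077 19:03:07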
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:02.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:02.077 19:03:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:02.250 EAL: No free 2048 kB hugepages reported on node 1
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:11:03.449 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:11:03.450 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.450 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:03.450 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.450 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
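The five rpc_cmd calls above are the complete provisioning sequence for the example target: transport, backing bdev, subsystem, namespace, listener. The same sequence as a standalone sketch with SPDK's scripts/rpc.py (the rpc_cmd wrapper resolves to it; default RPC socket assumed, flags copied verbatim from the trace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB I/O unit size
  scripts/rpc.py bdev_malloc_create 64 512                  # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

00:11:03.450 19:03:08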
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:03.450 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:03.450 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.450 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:11:03.450 19:03:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
EAL: No free 2048 kB hugepages reported on node 1
00:11:15.647 Initializing NVMe Controllers
00:11:15.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:15.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:15.647 Initialization complete. Launching workers.
00:11:15.647 ========================================================
00:11:15.647                                                       Latency(us)
00:11:15.647 Device Information                                          :       IOPS      MiB/s    Average        min        max
00:11:15.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   12508.08      48.86    5117.62    1218.32   16209.00
00:11:15.647 ========================================================
00:11:15.647 Total                                                       :   12508.08      48.86    5117.62    1218.32   16209.00
00:11:15.647
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:15.647 rmmod nvme_tcp
00:11:15.647 rmmod nvme_fabrics
00:11:15.647 rmmod nvme_keyring
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1591053 ']'
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1591053
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1591053 ']'
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1591053
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
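Two quick arithmetic checks on the perf table above, written as shell arithmetic with the reported numbers:

  echo $(( 12508 * 4096 / 1048576 ))   # 12508 IOPS x 4096 B per I/O / 2^20 = ~48, matching the 48.86 MiB/s column
  echo $(( 64 * 1000000 / 5118 ))      # Little's law: queue depth 64 / 5117.62 us avg latency = ~12504, matching 12508.08 IOPS

00:11:15.647 19:03:19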
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1591053 00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1591053' 00:11:15.647 killing process with pid 1591053 00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1591053 00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1591053 00:11:15.647 nvmf threads initialize successfully 00:11:15.647 bdev subsystem init successfully 00:11:15.647 created a nvmf target service 00:11:15.647 create targets's poll groups done 00:11:15.647 all subsystems of target started 00:11:15.647 nvmf target is running 00:11:15.647 all subsystems of target stopped 00:11:15.647 destroy targets's poll groups done 00:11:15.647 destroyed the nvmf target service 00:11:15.647 bdev subsystem finish successfully 00:11:15.647 nvmf threads destroy successfully 00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.647 19:03:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.215 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:16.215 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:16.215 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:16.215 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:16.215 00:11:16.215 real 0m17.074s 00:11:16.215 user 0m46.163s 00:11:16.215 sys 0m4.093s 00:11:16.215 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.215 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:16.215 ************************************ 00:11:16.215 END TEST nvmf_example 00:11:16.215 ************************************ 00:11:16.215 19:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:16.215 19:03:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:16.215 19:03:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.215 19:03:21 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:16.215 ************************************ 00:11:16.215 START TEST nvmf_filesystem 00:11:16.215 ************************************ 00:11:16.215 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:16.215 * Looking for test storage... 00:11:16.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.215 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:16.216 19:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:16.216 19:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:16.216 19:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:16.216 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:16.217 19:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:16.217 #define SPDK_CONFIG_H 00:11:16.217 #define SPDK_CONFIG_APPS 1 00:11:16.217 #define SPDK_CONFIG_ARCH native 00:11:16.217 #undef SPDK_CONFIG_ASAN 00:11:16.217 #undef SPDK_CONFIG_AVAHI 00:11:16.217 #undef SPDK_CONFIG_CET 00:11:16.217 #define SPDK_CONFIG_COVERAGE 1 00:11:16.217 #define SPDK_CONFIG_CROSS_PREFIX 00:11:16.217 #undef SPDK_CONFIG_CRYPTO 00:11:16.217 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:16.217 #undef SPDK_CONFIG_CUSTOMOCF 00:11:16.217 #undef SPDK_CONFIG_DAOS 00:11:16.217 #define SPDK_CONFIG_DAOS_DIR 00:11:16.217 #define SPDK_CONFIG_DEBUG 1 00:11:16.217 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:16.217 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:16.217 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:16.217 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:16.217 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:16.217 #undef SPDK_CONFIG_DPDK_UADK 00:11:16.217 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:16.217 #define SPDK_CONFIG_EXAMPLES 1 00:11:16.217 #undef SPDK_CONFIG_FC 00:11:16.217 #define SPDK_CONFIG_FC_PATH 00:11:16.217 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:16.217 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:16.217 #undef SPDK_CONFIG_FUSE 00:11:16.217 #undef SPDK_CONFIG_FUZZER 00:11:16.217 #define SPDK_CONFIG_FUZZER_LIB 00:11:16.217 #undef SPDK_CONFIG_GOLANG 00:11:16.217 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:16.217 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:16.217 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:16.217 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:16.217 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:16.217 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:16.217 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:16.217 #define SPDK_CONFIG_IDXD 1 00:11:16.217 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:16.217 #undef SPDK_CONFIG_IPSEC_MB 00:11:16.217 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:16.217 #define SPDK_CONFIG_ISAL 1 00:11:16.217 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:16.217 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:16.217 #define SPDK_CONFIG_LIBDIR 00:11:16.217 #undef SPDK_CONFIG_LTO 00:11:16.217 #define SPDK_CONFIG_MAX_LCORES 128 00:11:16.217 #define SPDK_CONFIG_NVME_CUSE 1 00:11:16.217 #undef SPDK_CONFIG_OCF 00:11:16.217 #define SPDK_CONFIG_OCF_PATH 00:11:16.217 #define SPDK_CONFIG_OPENSSL_PATH 00:11:16.217 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:16.217 #define SPDK_CONFIG_PGO_DIR 00:11:16.217 #undef SPDK_CONFIG_PGO_USE 00:11:16.217 #define SPDK_CONFIG_PREFIX /usr/local 00:11:16.217 #undef SPDK_CONFIG_RAID5F 00:11:16.217 #undef SPDK_CONFIG_RBD 00:11:16.217 #define SPDK_CONFIG_RDMA 1 00:11:16.217 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:16.217 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:16.217 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:16.217 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:16.217 #define SPDK_CONFIG_SHARED 1 00:11:16.217 #undef SPDK_CONFIG_SMA 00:11:16.217 #define SPDK_CONFIG_TESTS 1 00:11:16.217 #undef SPDK_CONFIG_TSAN 00:11:16.217 #define SPDK_CONFIG_UBLK 1 00:11:16.217 #define SPDK_CONFIG_UBSAN 1 00:11:16.217 #undef SPDK_CONFIG_UNIT_TESTS 00:11:16.217 #undef SPDK_CONFIG_URING 00:11:16.217 #define SPDK_CONFIG_URING_PATH 00:11:16.217 #undef SPDK_CONFIG_URING_ZNS 00:11:16.217 #undef SPDK_CONFIG_USDT 00:11:16.217 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:16.217 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:16.217 #define SPDK_CONFIG_VFIO_USER 1 00:11:16.217 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:11:16.217 #define SPDK_CONFIG_VHOST 1 00:11:16.217 #define SPDK_CONFIG_VIRTIO 1 00:11:16.217 #undef SPDK_CONFIG_VTUNE 00:11:16.217 #define SPDK_CONFIG_VTUNE_DIR 00:11:16.217 #define SPDK_CONFIG_WERROR 1 00:11:16.217 #define SPDK_CONFIG_WPDK_DIR 00:11:16.217 #undef SPDK_CONFIG_XNVME 00:11:16.217 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:16.217 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:16.218 19:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
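The long run of "# : 0" / "# export SPDK_TEST_..." pairs above and below is bash xtrace rendering of the usual default-then-export idiom: ":" is a no-op builtin whose arguments are still expanded, so an assign-default expansion sets each test flag only when the environment left it unset, and set -x prints the already-expanded value. A minimal sketch of that idiom (the variable name here is illustrative, not taken from the script):

    : "${SPDK_TEST_EXAMPLE:=0}"   # assigns 0 only if the flag is unset; traces as ": 0"
    export SPDK_TEST_EXAMPLE      # traces as "export SPDK_TEST_EXAMPLE"

Flags preset by the job configuration keep their values, which is why some pairs in this run trace as ": 1" (SPDK_TEST_NVMF, SPDK_TEST_VFIOUSER) or ": tcp" (SPDK_TEST_NVMF_TRANSPORT) instead of ": 0".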
00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:16.218 19:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:16.218 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:16.479 19:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:16.479 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 1593366 ]] 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 1593366 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:16.480 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.3XL1Jp 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.3XL1Jp/tests/target /tmp/spdk.3XL1Jp 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=949354496 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4335075328 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=38642900992 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=45083295744 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6440394752 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=22531727360 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=22541647872 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 
00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=8994226176 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=9016659968 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22433792 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=22540926976 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=22541647872 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=720896 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=4508323840 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=4508327936 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:16.481 * Looking for test storage... 
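The df -T pass above fills a set of associative arrays keyed by mount point (source device, filesystem type, and the size/available/used byte counts); set_test_storage then walks its candidate directories, maps each one to the mount it lives on, and takes the first with enough free space. A slightly simplified sketch of that selection loop, assuming the arrays were populated exactly as in the trace (the real script applies the 95%-full projection only to growable tmpfs/ramfs mounts and to the root filesystem):

    requested_size=2214592512     # the 2 GiB ask plus what appears to be 64 MiB of headroom
    for target_dir in "${storage_candidates[@]}"; do
        # resolve the mount point the candidate lives on ($6 of the df output)
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
        target_space=${avails[$mount]}
        (( target_space == 0 || target_space < requested_size )) && continue
        # project total use after the test and refuse to fill the fs past 95%
        new_size=$((requested_size + uses[$mount]))
        (( new_size * 100 / sizes[$mount] > 95 )) && continue
        export SPDK_TEST_STORAGE=$target_dir
        break
    done

In this run the first candidate already sits on the 38 GB overlay root (target_space=38642900992 against a 2.2 GB request, projecting to roughly 19% full), so the loop accepts it immediately and the /tmp/spdk.3XL1Jp fallback is never needed.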
00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=38642900992 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=8654987264 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.481 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.482 19:03:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']'
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable
00:11:16.482 19:03:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=()
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=()
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=()
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=()
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=()
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=()
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=()
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:11:19.014 Found 0000:84:00.0 (0x8086 - 0x159b)
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:11:19.014 Found 0000:84:00.1 (0x8086 - 0x159b)
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:11:19.014 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:11:19.015 Found net devices under 0000:84:00.0: cvl_0_0
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:11:19.015 Found net devices under 0000:84:00.1: cvl_0_1
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:11:19.015 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:19.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:19.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms
00:11:19.274
00:11:19.274 --- 10.0.0.2 ping statistics ---
00:11:19.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:19.274 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:19.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:19.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms
00:11:19.274
00:11:19.274 --- 10.0.0.1 ping statistics ---
00:11:19.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:19.274 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:11:19.274 ************************************
00:11:19.274 START TEST nvmf_filesystem_no_in_capsule
00:11:19.274 ************************************
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1595010
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1595010
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1595010 ']'
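nvmf_tcp_init above splits the two ice ports into a two-endpoint topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings verify reachability in both directions before the target starts. Condensed, the sequence is (commands exactly as traced above, run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator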
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:19.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:19.274 19:03:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:19.274 [2024-07-24 19:03:24.871374] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization...
00:11:19.274 [2024-07-24 19:03:24.871477] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:19.274 EAL: No free 2048 kB hugepages reported on node 1
00:11:19.532 [2024-07-24 19:03:24.976890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:19.532 [2024-07-24 19:03:25.177277] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:19.532 [2024-07-24 19:03:25.177394] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:19.532 [2024-07-24 19:03:25.177442] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:19.532 [2024-07-24 19:03:25.177479] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:19.532 [2024-07-24 19:03:25.177521] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:19.532 [2024-07-24 19:03:25.177603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:11:19.532 [2024-07-24 19:03:25.177665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:11:19.532 [2024-07-24 19:03:25.177723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:11:19.532 [2024-07-24 19:03:25.177727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:20.463 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:20.463 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0
00:11:20.464 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:20.464 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable
00:11:20.464 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:20.464 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:20.464 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:11:20.464 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:11:20.464 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.464 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:20.464 [2024-07-24 19:03:25.939799] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:20.464 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.464 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:11:20.464 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.464 19:03:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:20.464 Malloc1
00:11:20.464 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.464 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:20.464 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.464 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:20.464 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
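With nvmf_tgt running inside the namespace, the test configures it over the default UNIX domain RPC socket; rpc_cmd is a thin wrapper around scripts/rpc.py. Issued directly, the two RPCs traced above for this pass would look roughly like this (-c 0 disables in-capsule data; 512 and 512 are the malloc bdev's size in MiB and its block size in bytes):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1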
00:11:20.464 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:20.464 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.464 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:20.464 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.464 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:20.464 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.464 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:20.464 [2024-07-24 19:03:26.154770] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:20.464 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[
00:11:20.721 {
00:11:20.721 "name": "Malloc1",
00:11:20.721 "aliases": [
00:11:20.721 "adde56e8-6a85-435a-bd6f-362627b4a8c8"
00:11:20.721 ],
00:11:20.721 "product_name": "Malloc disk",
00:11:20.721 "block_size": 512,
00:11:20.721 "num_blocks": 1048576,
00:11:20.721 "uuid": "adde56e8-6a85-435a-bd6f-362627b4a8c8",
00:11:20.721 "assigned_rate_limits": {
00:11:20.721 "rw_ios_per_sec": 0,
00:11:20.721 "rw_mbytes_per_sec": 0,
00:11:20.721 "r_mbytes_per_sec": 0,
00:11:20.721 "w_mbytes_per_sec": 0
00:11:20.721 },
00:11:20.721 "claimed": true,
00:11:20.721 "claim_type": "exclusive_write",
00:11:20.721 "zoned": false,
00:11:20.721 "supported_io_types": {
00:11:20.721 "read": true,
00:11:20.721 "write": true,
00:11:20.721 "unmap": true,
00:11:20.721 "flush": true,
00:11:20.721 "reset": true,
00:11:20.721 "nvme_admin": false,
00:11:20.721 "nvme_io": false,
00:11:20.721 "nvme_io_md": false,
00:11:20.721 "write_zeroes": true,
00:11:20.721 "zcopy": true,
00:11:20.721 "get_zone_info": false,
00:11:20.721 "zone_management": false,
00:11:20.721 "zone_append": false,
00:11:20.721 "compare": false,
00:11:20.721 "compare_and_write": false,
00:11:20.721 "abort": true,
00:11:20.721 "seek_hole": false,
00:11:20.721 "seek_data": false,
00:11:20.721 "copy": true,
00:11:20.721 "nvme_iov_md": false
00:11:20.721 },
00:11:20.721 "memory_domains": [
00:11:20.721 {
00:11:20.721 "dma_device_id": "system",
00:11:20.721 "dma_device_type": 1
00:11:20.721 },
00:11:20.721 {
00:11:20.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:20.721 "dma_device_type": 2
00:11:20.721 }
00:11:20.721 ],
00:11:20.721 "driver_specific": {}
00:11:20.721 }
00:11:20.721 ]'
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:11:20.721 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:21.286 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:11:21.286 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0
00:11:21.286 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:11:21.286 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:11:21.286 19:03:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2
00:11:23.183 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:11:23.183 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:11:23.183 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
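get_bdev_size above derives the bdev's capacity from the bdev_get_bdevs JSON: block_size (512) times num_blocks (1048576) is 536870912 bytes, i.e. the 512 it echoes in MiB, which filesystem.sh records as malloc_size and later compares against the size the initiator sees for the NVMe device. A standalone sketch of that calculation, using the same jq filters as the trace:

    bdev_info=$(scripts/rpc.py bdev_get_bdevs -b Malloc1)
    bs=$(jq '.[] .block_size' <<< "$bdev_info")     # 512
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")     # 1048576
    echo $((bs * nb))                               # 536870912 bytes (512 MiB)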
00:11:23.183 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:11:23.183 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:11:23.440 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0
00:11:23.440 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:11:23.440 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:11:23.440 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:11:23.440 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:11:23.440 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:11:23.440 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:11:23.440 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:11:23.440 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:11:23.440 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:11:23.440 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:11:23.440 19:03:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:11:23.440 19:03:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:11:24.811 19:03:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:11:25.775 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:11:25.776 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:11:25.776 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:11:25.776 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:25.776 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:25.776 ************************************
00:11:25.776 START TEST filesystem_ext4
00:11:25.776 ************************************
00:11:25.776 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1
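waitforserial above polls lsblk until a block device advertising the subsystem serial SPDKISFASTANDAWESOME shows up, after which the test extracts the device name and partitions it. The connect, poll, and partition steps condense to roughly this (a sketch; the trace's retry loop is bounded at 15 iterations):

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
    dev=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/{print $1}')   # nvme0n1 here
    parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe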
00:11:25.776 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:11:25.776 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:25.776 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:11:25.776 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4
00:11:25.776 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1
00:11:25.776 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0
00:11:25.776 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force
00:11:25.776 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']'
00:11:25.776 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F
00:11:25.776 19:03:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:11:25.776 mke2fs 1.46.5 (30-Dec-2021)
00:11:25.776 Discarding device blocks: 0/522240 done
00:11:25.776 Creating filesystem with 522240 1k blocks and 130560 inodes
00:11:25.776 Filesystem UUID: 7a6ebccb-15f7-4f38-babe-cc138ce73663
00:11:25.776 Superblock backups stored on blocks:
00:11:25.776 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:11:25.776
00:11:25.776 Allocating group tables: 0/64 done
00:11:25.776 Writing inode tables: 0/64 done
00:11:26.722 Creating journal (8192 blocks): done
00:11:27.545 Writing superblocks and filesystem accounting information: 0/64 done
00:11:27.545
00:11:27.545 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0
00:11:27.545 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:28.111 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:28.111 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
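Each filesystem_* subtest repeats the same smoke test from target/filesystem.sh against /dev/nvme0n1p1: create the filesystem, mount it, write and remove a file with syncs in between, unmount, then confirm the target process and the block devices are still present. In outline (fstype is ext4, btrfs, then xfs in this run):

    mkfs."$fstype" $force /dev/nvme0n1p1     # force is -F for ext4, -f otherwise
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                       # target must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition must still exist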
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1595010
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:28.112
00:11:28.112 real 0m2.572s
00:11:28.112 user 0m0.013s
00:11:28.112 sys 0m0.060s
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:11:28.112 ************************************
00:11:28.112 END TEST filesystem_ext4
00:11:28.112 ************************************
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:28.112 ************************************
00:11:28.112 START TEST filesystem_btrfs
00:11:28.112 ************************************
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']'
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f
00:11:28.112 19:03:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:11:28.370 btrfs-progs v6.6.2
00:11:28.370 See https://btrfs.readthedocs.io for more information.
00:11:28.370
00:11:28.370 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:11:28.370 NOTE: several default settings have changed in version 5.15, please make sure
00:11:28.370 this does not affect your deployments:
00:11:28.370 - DUP for metadata (-m dup)
00:11:28.370 - enabled no-holes (-O no-holes)
00:11:28.370 - enabled free-space-tree (-R free-space-tree)
00:11:28.370
00:11:28.370 Label: (null)
00:11:28.370 UUID: 53a379ac-f145-4947-8767-d6a1fef9c172
00:11:28.370 Node size: 16384
00:11:28.370 Sector size: 4096
00:11:28.370 Filesystem size: 510.00MiB
00:11:28.370 Block group profiles:
00:11:28.370 Data: single 8.00MiB
00:11:28.370 Metadata: DUP 32.00MiB
00:11:28.370 System: DUP 8.00MiB
00:11:28.370 SSD detected: yes
00:11:28.370 Zoned device: no
00:11:28.370 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:11:28.370 Runtime features: free-space-tree
00:11:28.370 Checksum: crc32c
00:11:28.370 Number of devices: 1
00:11:28.370 Devices:
00:11:28.370 ID SIZE PATH
00:11:28.370 1 510.00MiB /dev/nvme0n1p1
00:11:28.370
00:11:28.370 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0
00:11:28.370 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:29.744 19:03:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1595010
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:29.744
00:11:29.744 real 0m1.317s
00:11:29.744 user 0m0.026s
00:11:29.744 sys 0m0.118s
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:11:29.744 ************************************
00:11:29.744 END TEST filesystem_btrfs
00:11:29.744 ************************************
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:29.744 ************************************
00:11:29.744 START TEST filesystem_xfs
00:11:29.744 ************************************
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']'
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f
00:11:29.744 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1
00:11:29.744 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:11:29.744 = sectsz=512 attr=2, projid32bit=1
00:11:29.744 = crc=1 finobt=1, sparse=1, rmapbt=0
00:11:29.744 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:11:29.745 data = bsize=4096 blocks=130560, imaxpct=25
00:11:29.745 = sunit=0 swidth=0 blks
00:11:29.745 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:11:29.745 log =internal log bsize=4096 blocks=16384, version=2
00:11:29.745 = sectsz=512 sunit=0 blks, lazy-count=1
00:11:29.745 realtime =none extsz=4096 blocks=0, rtextents=0
00:11:30.310 Discarding blocks...Done.
00:11:30.310 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0
00:11:30.310 19:03:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:32.836 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:32.836 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:11:32.836 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:11:32.836 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:11:32.836 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:11:32.836 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:11:32.836 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1595010
00:11:32.836 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:11:32.836 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:11:32.836 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:11:32.836 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:11:32.836
00:11:32.836 real 0m3.287s
00:11:32.836 user 0m0.023s
00:11:32.837 sys 0m0.059s
00:11:32.837 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:32.837 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:11:32.837 ************************************
00:11:32.837 END TEST filesystem_xfs
00:11:32.837 ************************************
00:11:32.837 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:11:32.837 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:33.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1595010
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1595010 ']'
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1595010
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1595010
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1595010'
00:11:33.095 killing process with pid 1595010
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1595010
00:11:33.095 19:03:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1595010
00:11:33.661 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:11:33.661
00:11:33.661 real 0m14.521s
00:11:33.661 user 0m55.448s
00:11:33.661 sys 0m1.893s
00:11:33.661 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:33.661 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:33.661 ************************************
00:11:33.661 END TEST nvmf_filesystem_no_in_capsule
00:11:33.661 ************************************
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:11:33.919 ************************************
00:11:33.919 START TEST nvmf_filesystem_in_capsule
00:11:33.919 ************************************
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1596949
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1596949
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1596949 ']'
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
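Teardown above mirrors setup: the partition is removed under an flock on the whole disk, the initiator disconnects, the subsystem is deleted over RPC, and killprocess stops the target only after confirming the pid still names the expected reactor process. Condensed (a sketch of the traced commands):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 1595010 && wait 1595010

The timing summary (real 0m14.521s) closes the no-in-capsule pass, and the same nvmf_filesystem_part flow now restarts with in_capsule=4096.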
00:11:33.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable
00:11:33.919 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:33.919 [2024-07-24 19:03:39.459722] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization...
00:11:33.919 [2024-07-24 19:03:39.459833] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:33.919 EAL: No free 2048 kB hugepages reported on node 1
00:11:34.177 [2024-07-24 19:03:39.555113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:34.177 [2024-07-24 19:03:39.699790] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:34.177 [2024-07-24 19:03:39.699858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:34.177 [2024-07-24 19:03:39.699878] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:34.177 [2024-07-24 19:03:39.699895] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:34.177 [2024-07-24 19:03:39.699909] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:34.177 [2024-07-24 19:03:39.699977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:11:34.177 [2024-07-24 19:03:39.700012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:11:34.177 [2024-07-24 19:03:39.700071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:11:34.177 [2024-07-24 19:03:39.700075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:34.177 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:11:34.177 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0
00:11:34.177 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:34.177 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable
00:11:34.177 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:34.435 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:34.435 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:11:34.435 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
00:11:34.435 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.435 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:34.435 [2024-07-24 19:03:39.888820] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:34.435 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.435 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:11:34.435 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.435 19:03:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:34.435 Malloc1
00:11:34.435 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.435 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:34.435 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.435 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:34.435 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.435 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:34.435 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.435 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:34.436 [2024-07-24 19:03:40.096599] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[
00:11:34.436 {
00:11:34.436 "name": "Malloc1",
00:11:34.436 "aliases": [
00:11:34.436 "3d6d3b81-02c9-432d-ad33-7ac2d0d82a11"
00:11:34.436 ],
00:11:34.436 "product_name": "Malloc disk",
00:11:34.436 "block_size": 512,
00:11:34.436 "num_blocks": 1048576,
00:11:34.436 "uuid": "3d6d3b81-02c9-432d-ad33-7ac2d0d82a11",
00:11:34.436 "assigned_rate_limits": {
00:11:34.436 "rw_ios_per_sec": 0,
00:11:34.436 "rw_mbytes_per_sec": 0,
00:11:34.436 "r_mbytes_per_sec": 0,
00:11:34.436 "w_mbytes_per_sec": 0
00:11:34.436 },
00:11:34.436 "claimed": true,
00:11:34.436 "claim_type": "exclusive_write",
00:11:34.436 "zoned": false,
00:11:34.436 "supported_io_types": {
00:11:34.436 "read": true,
00:11:34.436 "write": true,
00:11:34.436 "unmap": true,
00:11:34.436 "flush": true,
00:11:34.436 "reset": true,
00:11:34.436 "nvme_admin": false,
00:11:34.436 "nvme_io": false,
00:11:34.436 "nvme_io_md": false,
00:11:34.436 "write_zeroes": true,
00:11:34.436 "zcopy": true,
00:11:34.436 "get_zone_info": false,
00:11:34.436 "zone_management": false,
00:11:34.436 "zone_append": false,
00:11:34.436 "compare": false,
00:11:34.436 "compare_and_write": false,
00:11:34.436 "abort": true,
00:11:34.436 "seek_hole": false,
00:11:34.436 "seek_data": false,
00:11:34.436 "copy": true,
00:11:34.436 "nvme_iov_md": false
00:11:34.436 },
00:11:34.436 "memory_domains": [
00:11:34.436 {
00:11:34.436 "dma_device_id": "system",
00:11:34.436 "dma_device_type": 1
00:11:34.436 },
00:11:34.436 {
00:11:34.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:34.436 "dma_device_type": 2
00:11:34.436 }
00:11:34.436 ],
00:11:34.436 "driver_specific": {}
00:11:34.436 }
00:11:34.436 ]'
00:11:34.436 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size'
00:11:34.693 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512
00:11:34.693 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks'
00:11:34.693 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576
00:11:34.694 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512
00:11:34.694 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512
00:11:34.694 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:11:34.694 19:03:40
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:35.259 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:35.259 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:35.259 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:35.259 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:35.259 19:03:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:37.167 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:37.167 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:37.167 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:37.426 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:37.426 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:37.426 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:37.426 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:37.426 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:37.426 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:37.426 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:37.426 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:37.426 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:37.426 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:37.426 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:37.426 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:37.426 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:37.426 19:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:37.683 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:38.614 19:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.549 ************************************ 00:11:39.549 START TEST filesystem_in_capsule_ext4 00:11:39.549 ************************************ 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:39.549 19:03:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:39.549 mke2fs 1.46.5 (30-Dec-2021) 00:11:39.549 Discarding device blocks: 0/522240 done 00:11:39.549 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:39.549 Filesystem UUID: 69719197-e180-4368-b092-d077810b09c7 00:11:39.549 Superblock backups stored on blocks: 00:11:39.549 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:11:39.549 00:11:39.549 Allocating group tables: 0/64 done 00:11:39.549 Writing inode tables: 0/64 done 00:11:42.860 Creating journal (8192 blocks): done 00:11:42.860 Writing superblocks and filesystem accounting information: 0/64 done 00:11:42.860 00:11:42.860 19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:42.860 19:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1596949 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:43.118 00:11:43.118 real 0m3.777s 00:11:43.118 user 0m0.022s 00:11:43.118 sys 0m0.060s 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:43.118 ************************************ 00:11:43.118 END TEST filesystem_in_capsule_ext4 00:11:43.118 ************************************ 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:43.118 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.118 19:03:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.377 ************************************ 00:11:43.377 START TEST filesystem_in_capsule_btrfs 00:11:43.377 ************************************ 00:11:43.377 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:43.377 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:43.377 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.377 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:43.377 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:43.377 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:43.377 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:43.377 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:43.377 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:43.377 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:43.377 19:03:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:43.377 btrfs-progs v6.6.2 00:11:43.377 See https://btrfs.readthedocs.io for more information. 00:11:43.377 00:11:43.377 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:43.377 NOTE: several default settings have changed in version 5.15, please make sure 00:11:43.377 this does not affect your deployments: 00:11:43.377 - DUP for metadata (-m dup) 00:11:43.377 - enabled no-holes (-O no-holes) 00:11:43.377 - enabled free-space-tree (-R free-space-tree) 00:11:43.377 00:11:43.377 Label: (null) 00:11:43.377 UUID: f1801f00-8257-4773-b221-4576aee45c37 00:11:43.377 Node size: 16384 00:11:43.377 Sector size: 4096 00:11:43.377 Filesystem size: 510.00MiB 00:11:43.377 Block group profiles: 00:11:43.377 Data: single 8.00MiB 00:11:43.377 Metadata: DUP 32.00MiB 00:11:43.377 System: DUP 8.00MiB 00:11:43.377 SSD detected: yes 00:11:43.377 Zoned device: no 00:11:43.377 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:43.377 Runtime features: free-space-tree 00:11:43.377 Checksum: crc32c 00:11:43.377 Number of devices: 1 00:11:43.377 Devices: 00:11:43.377 ID SIZE PATH 00:11:43.377 1 510.00MiB /dev/nvme0n1p1 00:11:43.377 00:11:43.377 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:43.377 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1596949 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:44.310 00:11:44.310 real 0m1.143s 00:11:44.310 user 0m0.015s 00:11:44.310 sys 0m0.143s 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:44.310 19:03:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:44.310 ************************************ 00:11:44.310 END TEST filesystem_in_capsule_btrfs 00:11:44.310 ************************************ 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:44.310 19:03:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.568 ************************************ 00:11:44.568 START TEST filesystem_in_capsule_xfs 00:11:44.568 ************************************ 00:11:44.568 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:44.568 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:44.568 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:44.568 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:44.568 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:44.568 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:44.568 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:44.568 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:44.568 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:44.568 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:44.568 19:03:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:44.568 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:44.568 = sectsz=512 attr=2, projid32bit=1 00:11:44.568 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:44.568 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:44.568 data = bsize=4096 blocks=130560, imaxpct=25 00:11:44.568 = sunit=0 swidth=0 blks 00:11:44.568 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:44.568 log =internal log bsize=4096 blocks=16384, version=2 00:11:44.568 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:44.568 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:11:45.502 Discarding blocks...Done. 00:11:45.502 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:45.503 19:03:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:47.400 19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:47.400 19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:47.401 19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:47.401 19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:47.401 19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:47.401 19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:47.401 19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1596949 00:11:47.401 19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:47.401 19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:47.401 19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:47.401 19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:47.401 00:11:47.401 real 0m2.934s 00:11:47.401 user 0m0.022s 00:11:47.401 sys 0m0.065s 00:11:47.401 19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:47.401 19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:47.401 ************************************ 00:11:47.401 END TEST filesystem_in_capsule_xfs 00:11:47.401 ************************************ 00:11:47.401 19:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:47.401 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:47.401 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.659 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.659 19:03:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1596949 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1596949 ']' 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1596949 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1596949 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1596949' 00:11:47.659 killing process with pid 1596949 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1596949 00:11:47.659 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1596949 00:11:48.223 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:48.223 00:11:48.223 real 0m14.464s 00:11:48.223 user 0m55.252s 
00:11:48.223 sys 0m2.015s 00:11:48.223 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:48.223 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.223 ************************************ 00:11:48.223 END TEST nvmf_filesystem_in_capsule 00:11:48.223 ************************************ 00:11:48.223 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:48.223 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:48.223 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:48.223 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:48.223 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:48.223 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:48.223 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:48.223 rmmod nvme_tcp 00:11:48.223 rmmod nvme_fabrics 00:11:48.482 rmmod nvme_keyring 00:11:48.482 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:48.482 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:48.482 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:48.482 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:48.482 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:48.482 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:48.482 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:48.482 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:48.482 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:48.482 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.482 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.482 19:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.382 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:50.382 00:11:50.382 real 0m34.244s 00:11:50.382 user 1m51.735s 00:11:50.382 sys 0m6.146s 00:11:50.382 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.382 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.382 ************************************ 00:11:50.382 END TEST nvmf_filesystem 00:11:50.382 ************************************ 00:11:50.382 19:03:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:50.382 19:03:56 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:50.382 19:03:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.382 19:03:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:50.382 ************************************ 00:11:50.382 START TEST nvmf_target_discovery 00:11:50.382 ************************************ 00:11:50.382 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:50.641 * Looking for test storage... 00:11:50.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.641 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.641 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:50.641 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.641 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.641 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.641 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.641 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.641 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.641 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.641 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.641 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.642 19:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:50.642 19:03:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:53.925 19:03:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:53.925 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:53.925 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:53.925 Found net devices under 0000:84:00.0: cvl_0_0 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:53.925 19:03:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:53.925 Found net devices under 0000:84:00.1: cvl_0_1 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:53.925 19:03:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.925 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.925 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.925 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.925 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:53.925 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.925 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.925 19:03:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.925 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:53.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:11:53.925 00:11:53.925 --- 10.0.0.2 ping statistics --- 00:11:53.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.925 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:11:53.925 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:11:53.925 00:11:53.925 --- 10.0.0.1 ping statistics --- 00:11:53.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.925 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1600839 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1600839 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1600839 ']' 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.926 19:03:59 
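Before the target starts, the harness opens TCP/4420 on the initiator-side interface and ping-checks both directions; nvmf_tgt (pid 1600839 in this run) is then launched inside the namespace so it binds the target-side port. Condensed, with $SPDK_DIR standing in for the Jenkins workspace path used above:

  # Admit NVMe/TCP, verify reachability, start the target in the netns.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
  modprobe nvme-tcp                                    # kernel initiator driver
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
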
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.926 19:03:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:53.926 [2024-07-24 19:03:59.263567] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:11:53.926 [2024-07-24 19:03:59.263681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.926 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.926 [2024-07-24 19:03:59.383698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.926 [2024-07-24 19:03:59.587807] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.926 [2024-07-24 19:03:59.587928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.926 [2024-07-24 19:03:59.587964] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.926 [2024-07-24 19:03:59.587994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.926 [2024-07-24 19:03:59.588021] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.926 [2024-07-24 19:03:59.588191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.926 [2024-07-24 19:03:59.588252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.926 [2024-07-24 19:03:59.588335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.926 [2024-07-24 19:03:59.588342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 [2024-07-24 19:04:00.324875] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
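Once the reactors report in, rpc_cmd drives the app over /var/tmp/spdk.sock; in SPDK's test harness rpc_cmd is a thin wrapper around scripts/rpc.py. The transport creation that produced the 'TCP Transport Init' notice, written as a direct call (default socket path assumed):

  # '-t tcp -o -u 8192' exactly as passed above; -u sets the I/O unit size.
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
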
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 Null1 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 [2024-07-24 19:04:00.370287] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 Null2 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 
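discovery.sh is inside its seq 1 4 provisioning loop here: per index it creates a null bdev (102400 MB logical, 512-byte blocks), a subsystem allowing any host (-a) with a fixed serial (-s), attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420. Null1 is complete above and Null2 through Null4 follow; the whole loop in one place:

  # One bdev + subsystem + namespace + listener per index, as traced.
  for i in $(seq 1 4); do
      rpc_cmd bdev_null_create "Null$i" 102400 512
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
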
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 Null3 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 Null4 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.858 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:11:55.117 00:11:55.117 Discovery Log Number of Records 6, Generation counter 6 00:11:55.117 =====Discovery Log Entry 0====== 00:11:55.117 trtype: tcp 00:11:55.117 adrfam: ipv4 00:11:55.117 subtype: current discovery subsystem 00:11:55.117 treq: not required 00:11:55.117 portid: 0 00:11:55.117 trsvcid: 4420 00:11:55.117 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:55.117 traddr: 10.0.0.2 00:11:55.117 eflags: explicit discovery connections, duplicate discovery information 00:11:55.117 sectype: none 00:11:55.117 =====Discovery Log Entry 1====== 00:11:55.117 trtype: tcp 00:11:55.117 adrfam: ipv4 00:11:55.117 subtype: nvme subsystem 00:11:55.117 treq: not required 00:11:55.117 portid: 0 00:11:55.117 trsvcid: 4420 00:11:55.117 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:55.117 traddr: 10.0.0.2 00:11:55.117 eflags: none 00:11:55.117 sectype: none 00:11:55.117 =====Discovery Log Entry 2====== 00:11:55.117 trtype: tcp 00:11:55.117 adrfam: ipv4 00:11:55.117 subtype: nvme subsystem 00:11:55.117 treq: not required 00:11:55.117 portid: 0 00:11:55.117 trsvcid: 4420 00:11:55.117 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:55.117 traddr: 10.0.0.2 00:11:55.117 eflags: none 00:11:55.117 sectype: none 00:11:55.117 =====Discovery Log Entry 3====== 00:11:55.117 trtype: tcp 00:11:55.117 adrfam: ipv4 00:11:55.117 subtype: nvme subsystem 00:11:55.117 treq: not required 00:11:55.117 portid: 0 00:11:55.117 trsvcid: 4420 00:11:55.117 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:55.117 traddr: 10.0.0.2 00:11:55.117 eflags: none 00:11:55.117 sectype: none 00:11:55.117 =====Discovery Log Entry 4====== 00:11:55.117 trtype: tcp 00:11:55.117 adrfam: ipv4 00:11:55.117 subtype: nvme subsystem 00:11:55.117 treq: not required 00:11:55.117 portid: 0 00:11:55.117 trsvcid: 4420 00:11:55.117 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:55.117 traddr: 10.0.0.2 00:11:55.117 eflags: none 00:11:55.117 sectype: none 00:11:55.117 =====Discovery Log Entry 5====== 00:11:55.117 trtype: tcp 00:11:55.117 adrfam: ipv4 00:11:55.117 subtype: discovery subsystem referral 00:11:55.117 treq: not required 00:11:55.117 portid: 0 00:11:55.117 trsvcid: 4430 00:11:55.117 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:55.117 traddr: 10.0.0.2 00:11:55.117 eflags: none 00:11:55.117 sectype: none 00:11:55.117 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:55.117 Perform nvmf subsystem discovery via RPC 00:11:55.117 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:55.117 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.117 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.117 [ 00:11:55.117 { 00:11:55.117 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:55.117 "subtype": "Discovery", 00:11:55.117 "listen_addresses": [ 00:11:55.117 { 00:11:55.117 "trtype": "TCP", 00:11:55.117 "adrfam": "IPv4", 00:11:55.117 "traddr": "10.0.0.2", 00:11:55.117 "trsvcid": "4420" 00:11:55.117 } 00:11:55.117 ], 00:11:55.117 "allow_any_host": true, 00:11:55.117 "hosts": [] 00:11:55.117 }, 00:11:55.117 { 00:11:55.117 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:55.117 "subtype": "NVMe", 00:11:55.117 "listen_addresses": [ 00:11:55.117 { 00:11:55.117 "trtype": "TCP", 00:11:55.117 "adrfam": "IPv4", 00:11:55.117 
"traddr": "10.0.0.2", 00:11:55.117 "trsvcid": "4420" 00:11:55.117 } 00:11:55.117 ], 00:11:55.117 "allow_any_host": true, 00:11:55.117 "hosts": [], 00:11:55.117 "serial_number": "SPDK00000000000001", 00:11:55.117 "model_number": "SPDK bdev Controller", 00:11:55.117 "max_namespaces": 32, 00:11:55.117 "min_cntlid": 1, 00:11:55.117 "max_cntlid": 65519, 00:11:55.117 "namespaces": [ 00:11:55.117 { 00:11:55.117 "nsid": 1, 00:11:55.117 "bdev_name": "Null1", 00:11:55.117 "name": "Null1", 00:11:55.117 "nguid": "A95659118F024D47B21D6282EE56CCFC", 00:11:55.117 "uuid": "a9565911-8f02-4d47-b21d-6282ee56ccfc" 00:11:55.117 } 00:11:55.117 ] 00:11:55.117 }, 00:11:55.117 { 00:11:55.117 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:55.117 "subtype": "NVMe", 00:11:55.117 "listen_addresses": [ 00:11:55.117 { 00:11:55.117 "trtype": "TCP", 00:11:55.117 "adrfam": "IPv4", 00:11:55.117 "traddr": "10.0.0.2", 00:11:55.117 "trsvcid": "4420" 00:11:55.117 } 00:11:55.117 ], 00:11:55.117 "allow_any_host": true, 00:11:55.117 "hosts": [], 00:11:55.117 "serial_number": "SPDK00000000000002", 00:11:55.117 "model_number": "SPDK bdev Controller", 00:11:55.117 "max_namespaces": 32, 00:11:55.117 "min_cntlid": 1, 00:11:55.117 "max_cntlid": 65519, 00:11:55.117 "namespaces": [ 00:11:55.117 { 00:11:55.117 "nsid": 1, 00:11:55.117 "bdev_name": "Null2", 00:11:55.117 "name": "Null2", 00:11:55.117 "nguid": "D57537F37224416B9A6AF72586F20F70", 00:11:55.117 "uuid": "d57537f3-7224-416b-9a6a-f72586f20f70" 00:11:55.117 } 00:11:55.117 ] 00:11:55.117 }, 00:11:55.117 { 00:11:55.117 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:55.117 "subtype": "NVMe", 00:11:55.117 "listen_addresses": [ 00:11:55.117 { 00:11:55.117 "trtype": "TCP", 00:11:55.117 "adrfam": "IPv4", 00:11:55.117 "traddr": "10.0.0.2", 00:11:55.117 "trsvcid": "4420" 00:11:55.117 } 00:11:55.117 ], 00:11:55.117 "allow_any_host": true, 00:11:55.117 "hosts": [], 00:11:55.117 "serial_number": "SPDK00000000000003", 00:11:55.117 "model_number": "SPDK bdev Controller", 00:11:55.117 "max_namespaces": 32, 00:11:55.117 "min_cntlid": 1, 00:11:55.117 "max_cntlid": 65519, 00:11:55.117 "namespaces": [ 00:11:55.117 { 00:11:55.117 "nsid": 1, 00:11:55.117 "bdev_name": "Null3", 00:11:55.117 "name": "Null3", 00:11:55.117 "nguid": "52FEA25519EB4B8986DCF2C63F1BAA21", 00:11:55.117 "uuid": "52fea255-19eb-4b89-86dc-f2c63f1baa21" 00:11:55.117 } 00:11:55.117 ] 00:11:55.117 }, 00:11:55.117 { 00:11:55.117 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:55.117 "subtype": "NVMe", 00:11:55.117 "listen_addresses": [ 00:11:55.117 { 00:11:55.117 "trtype": "TCP", 00:11:55.117 "adrfam": "IPv4", 00:11:55.117 "traddr": "10.0.0.2", 00:11:55.117 "trsvcid": "4420" 00:11:55.117 } 00:11:55.117 ], 00:11:55.117 "allow_any_host": true, 00:11:55.117 "hosts": [], 00:11:55.117 "serial_number": "SPDK00000000000004", 00:11:55.117 "model_number": "SPDK bdev Controller", 00:11:55.117 "max_namespaces": 32, 00:11:55.117 "min_cntlid": 1, 00:11:55.117 "max_cntlid": 65519, 00:11:55.117 "namespaces": [ 00:11:55.117 { 00:11:55.117 "nsid": 1, 00:11:55.117 "bdev_name": "Null4", 00:11:55.117 "name": "Null4", 00:11:55.117 "nguid": "4D9E81B40BEF480E955041DFE1DD4B38", 00:11:55.117 "uuid": "4d9e81b4-0bef-480e-9550-41dfe1dd4b38" 00:11:55.117 } 00:11:55.117 ] 00:11:55.117 } 00:11:55.117 ] 00:11:55.117 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.117 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:55.118 19:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:55.118 19:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:55.118 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:55.118 rmmod nvme_tcp 00:11:55.118 rmmod nvme_fabrics 00:11:55.376 rmmod nvme_keyring 00:11:55.376 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:55.377 19:04:00 
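Teardown mirrors setup: each subsystem is deleted before its backing bdev, the referral is removed, and bdev_get_bdevs must come back empty before the trap is cleared and the nvme kernel modules are unloaded (the rmmod lines above); the app itself is reaped just below via killprocess. Condensed:

  # Reverse dependency order, then assert nothing leaked.
  for i in $(seq 1 4); do
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      rpc_cmd bdev_null_delete "Null$i"
  done
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  [[ -z $(rpc_cmd bdev_get_bdevs | jq -r '.[].name') ]]   # no leftover bdevs
  modprobe -v -r nvme-tcp                                 # drops nvme_fabrics/nvme_keyring too
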
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:55.377 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:55.377 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1600839 ']' 00:11:55.377 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1600839 00:11:55.377 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1600839 ']' 00:11:55.377 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1600839 00:11:55.377 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:55.377 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:55.377 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1600839 00:11:55.377 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:55.377 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:55.377 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1600839' 00:11:55.377 killing process with pid 1600839 00:11:55.377 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1600839 00:11:55.377 19:04:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1600839 00:11:55.635 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:55.635 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:55.635 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:55.635 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.635 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:55.635 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.635 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.635 19:04:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.163 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:58.163 00:11:58.163 real 0m7.276s 00:11:58.163 user 0m7.731s 00:11:58.163 sys 0m2.763s 00:11:58.163 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.163 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.163 ************************************ 00:11:58.163 END TEST nvmf_target_discovery 00:11:58.163 ************************************ 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:58.164 ************************************ 00:11:58.164 START TEST nvmf_referrals 00:11:58.164 ************************************ 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:58.164 * Looking for test storage... 00:11:58.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.164 19:04:03 
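referrals.sh begins by sourcing the same nvmf/common.sh, which derives the initiator identity from nvme-cli: gen-hostnqn emits a uuid-based NQN and that uuid reappears as the host ID (the cd6acfbe-... value above). One way to express the derivation; the exact expansion common.sh uses is not shown in the trace:

  # Host identity as used by later discover/connect calls.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: strip through the last ':'
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
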
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.164 19:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:11:58.164 19:04:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
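The referrals fixture is pinned up front: three referral addresses on the 127.0.0.0/8 block, the 4430 referral port also used in the previous test, and the well-known discovery NQN. As declared in the trace:

  # Fixture for the rest of this test (values from the trace above).
  NVMF_REFERRAL_IP_1=127.0.0.2
  NVMF_REFERRAL_IP_2=127.0.0.3
  NVMF_REFERRAL_IP_3=127.0.0.4
  NVMF_PORT_REFERRAL=4430
  DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
  NQN=nqn.2016-06.io.spdk:cnode1
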
net_devs=() 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.725 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:00.726 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.726 19:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:00.726 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:00.726 Found net devices under 0000:84:00.0: cvl_0_0 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 
00:12:00.726 Found net devices under 0000:84:00.1: cvl_0_1 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:00.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:00.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:12:00.726 00:12:00.726 --- 10.0.0.2 ping statistics --- 00:12:00.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.726 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:00.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:12:00.726 00:12:00.726 --- 10.0.0.1 ping statistics --- 00:12:00.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.726 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1603076 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1603076 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1603076 ']' 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:00.726 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:00.726 [2024-07-24 19:04:06.318942] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:12:00.726 [2024-07-24 19:04:06.319053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.726 EAL: No free 2048 kB hugepages reported on node 1 00:12:00.985 [2024-07-24 19:04:06.426076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.985 [2024-07-24 19:04:06.626169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.985 [2024-07-24 19:04:06.626275] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.985 [2024-07-24 19:04:06.626310] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.985 [2024-07-24 19:04:06.626340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.985 [2024-07-24 19:04:06.626366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.985 [2024-07-24 19:04:06.626494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.985 [2024-07-24 19:04:06.626558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.985 [2024-07-24 19:04:06.626617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.985 [2024-07-24 19:04:06.626622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.244 [2024-07-24 19:04:06.798641] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.244 19:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.244 [2024-07-24 19:04:06.811898] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.244 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
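The referral bring-up above is driven entirely over JSON-RPC; rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock (the rpc.py spelling below is an assumption, the verbs and arguments are verbatim from the trace):

    # referrals.sh @40-@48, condensed: one discovery listener, three referrals.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    rpc.py nvmf_discovery_get_referrals | jq length    # the test expects 3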
127.0.0.3 127.0.0.4 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.245 19:04:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.503 19:04:07 
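get_referral_ips has two modes: "rpc" asks the target itself, "nvme" asks from the initiator side over the wire. The nvme mode, as traced at @26, filters the discovery log page so the discovery subsystem's own record is excluded:

    # Host-side referral listing (referrals.sh @26); NVME_HOSTNQN/NVME_HOSTID
    # are set in nvmf/common.sh via nvme gen-hostnqn.
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
      | sort
    # At this point in the test it printed: 127.0.0.2 127.0.0.3 127.0.0.4

The test asserts the RPC and wire views agree, then removes all three referrals with nvmf_discovery_remove_referral and checks that both views drain to empty.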
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.503 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
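A referral can also carry an explicit subsystem NQN via -n. The trace re-adds the same address twice, once as a plain discovery referral and once pointing at nqn.2016-06.io.spdk:cnode1, which is why both views now report 127.0.0.2 twice:

    # referrals.sh @60 and @62, verbatim arguments:
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 \
        -n nqn.2016-06.io.spdk:cnode1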
00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:01.761 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:02.019 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:02.019 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:02.019 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:02.019 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:02.019 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:02.019 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.019 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:02.020 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:02.020 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:02.020 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:02.020 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:02.020 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.020 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.278 19:04:07 
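get_discovery_entries (referrals.sh @31-@34) selects discovery-log records by subtype; reconstructed from the traced lines:

    # Filter the JSON discovery log page by record subtype.
    get_discovery_entries() {
        local subtype=$1
        nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -a 10.0.0.2 -s 8009 -o json \
          | jq ".records[] | select(.subtype == \"$subtype\")"
    }
    get_discovery_entries 'nvme subsystem' | jq -r .subnqn
    # -> nqn.2016-06.io.spdk:cnode1 (the NQN-qualified referral)
    get_discovery_entries 'discovery subsystem referral' | jq -r .subnqn
    # -> nqn.2014-08.org.nvmexpress.discovery (the plain referral)

The remainder of the test removes each referral flavor in turn and re-runs both checks until the referral count and the wire view are empty again.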
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:02.278 19:04:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:02.536 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:02.536 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:02.536 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:02.536 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:02.536 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:02.536 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.536 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:02.536 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:02.536 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:02.536 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:02.536 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:02.536 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.536 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:02.795 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:02.795 rmmod nvme_tcp 00:12:02.795 rmmod nvme_fabrics 00:12:03.053 rmmod nvme_keyring 00:12:03.053 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:03.053 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:03.053 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:03.053 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1603076 ']' 00:12:03.053 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1603076 00:12:03.053 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1603076 ']' 00:12:03.053 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1603076 00:12:03.053 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:03.053 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:03.053 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1603076 00:12:03.053 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:03.053 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:03.053 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1603076' 00:12:03.053 killing process with pid 1603076 00:12:03.053 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1603076 00:12:03.053 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1603076 00:12:03.312 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:03.312 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:03.312 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:03.312 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:03.312 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:03.312 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.312 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.312 19:04:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.842 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:05.842 00:12:05.842 real 0m7.644s 00:12:05.842 user 0m10.849s 00:12:05.842 sys 0m2.721s 00:12:05.842 19:04:11 
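nvmftestfini then reverses the setup: unload the initiator-side kernel modules, kill the target, drop the namespace, and flush the leftover address. Condensed from the trace (the netns delete inside _remove_spdk_ns is an assumption; its output is redirected away at @22):

    sync
    modprobe -v -r nvme-tcp            # rmmod output above shows it also pulls
    modprobe -v -r nvme-fabrics        # out nvme_fabrics and nvme_keyring
    kill "$nvmfpid"                    # killprocess 1603076 (reactor_0)
    ip netns delete cvl_0_0_ns_spdk    # presumably what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1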
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.842 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:05.842 ************************************ 00:12:05.842 END TEST nvmf_referrals 00:12:05.843 ************************************ 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.843 ************************************ 00:12:05.843 START TEST nvmf_connect_disconnect 00:12:05.843 ************************************ 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:05.843 * Looking for test storage... 00:12:05.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.843 19:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:05.843 19:04:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:08.378 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:08.379 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:08.379 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:08.379 19:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:08.379 Found net devices under 0000:84:00.0: cvl_0_0 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:08.379 Found net devices under 0000:84:00.1: cvl_0_1 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
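The PCI walk above comes straight from gather_supported_nvmf_pci_devs: the e810 array is seeded with Intel device IDs (0x1592/0x159b), matching functions are kept, and each function's kernel netdev name is read out of sysfs:

    # nvmf/common.sh @382-@401 as traced: map each kept PCI function to its netdev.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path
        net_devs+=("${pci_net_devs[@]}")
    done
    # Here: 0000:84:00.0 -> cvl_0_0 (target), 0000:84:00.1 -> cvl_0_1 (initiator)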
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:08.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:12:08.379 00:12:08.379 --- 10.0.0.2 ping statistics --- 00:12:08.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.379 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:12:08.379 00:12:08.379 --- 10.0.0.1 ping statistics --- 00:12:08.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.379 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.379 19:04:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1605510 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1605510 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1605510 ']' 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:08.379 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.638 [2024-07-24 19:04:14.089394] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:12:08.638 [2024-07-24 19:04:14.089501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.638 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.638 [2024-07-24 19:04:14.192003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.897 [2024-07-24 19:04:14.360831] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.897 [2024-07-24 19:04:14.360911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.897 [2024-07-24 19:04:14.360937] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.897 [2024-07-24 19:04:14.360959] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.897 [2024-07-24 19:04:14.360989] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.897 [2024-07-24 19:04:14.361087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.897 [2024-07-24 19:04:14.361147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.897 [2024-07-24 19:04:14.361224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.897 [2024-07-24 19:04:14.361229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.897 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:08.897 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:08.897 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:08.897 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:08.897 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.897 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.897 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:08.897 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.897 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.898 [2024-07-24 19:04:14.558851] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.898 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.898 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:08.898 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.898 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.156 19:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:09.156 [2024-07-24 19:04:14.625934] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:09.156 19:04:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:11.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.329 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:23.329 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:23.329 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:23.329 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:23.329 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:23.329 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:23.329 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:23.329 19:04:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:23.329 rmmod nvme_tcp 00:12:23.329 rmmod nvme_fabrics 00:12:23.329 rmmod nvme_keyring 00:12:23.329 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:23.329 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1605510 ']' 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1605510 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1605510 ']' 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1605510 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1605510 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1605510' 00:12:23.330 killing process with pid 1605510 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1605510 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1605510 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.330 19:04:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.233 19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:25.492 00:12:25.492 real 0m19.832s 00:12:25.492 user 0m57.589s 00:12:25.492 sys 0m3.800s 00:12:25.492 19:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.492 19:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:25.492 ************************************ 00:12:25.492 END TEST nvmf_connect_disconnect 00:12:25.492 ************************************ 00:12:25.492 19:04:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:25.492 19:04:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:25.492 19:04:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.492 19:04:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:25.492 ************************************ 00:12:25.492 START TEST nvmf_multitarget 00:12:25.492 ************************************ 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:25.492 * Looking for test storage... 00:12:25.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.492 19:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.492 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.493 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.493 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.493 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:25.493 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:25.493 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:25.493 19:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.774 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:28.775 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.775 19:04:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:28.775 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:28.775 Found net devices under 0000:84:00.0: cvl_0_0 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:28.775 Found net devices under 0000:84:00.1: cvl_0_1 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:28.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:28.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:12:28.775 00:12:28.775 --- 10.0.0.2 ping statistics --- 00:12:28.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.775 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:12:28.775 00:12:28.775 --- 10.0.0.1 ping statistics --- 00:12:28.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.775 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:28.775 19:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:28.775 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:28.775 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:28.775 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:28.775 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.775 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1609279 00:12:28.775 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.775 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1609279 00:12:28.775 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1609279 ']' 00:12:28.775 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.775 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:28.775 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
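waitforlisten parks here until the freshly started target answers RPCs on the socket named above. Roughly, as a sketch (the real helper in autotest_common.sh also honors the max_retries=100 seen in the trace, which is dropped here for brevity):

    # poll the UNIX-domain RPC socket until nvmf_tgt is ready
    while ! ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target already died
        sleep 0.5
    done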
00:12:28.776 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:28.776 19:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:28.776 [2024-07-24 19:04:34.080256] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:12:28.776 [2024-07-24 19:04:34.080353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.776 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.776 [2024-07-24 19:04:34.195836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.776 [2024-07-24 19:04:34.406814] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.776 [2024-07-24 19:04:34.406920] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.776 [2024-07-24 19:04:34.406956] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.776 [2024-07-24 19:04:34.406996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.776 [2024-07-24 19:04:34.407024] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.776 [2024-07-24 19:04:34.407193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.776 [2024-07-24 19:04:34.407255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.776 [2024-07-24 19:04:34.407338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.776 [2024-07-24 19:04:34.407345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.710 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:29.710 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:29.710 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:29.710 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:29.710 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:29.710 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.710 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:29.710 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:29.710 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:29.967 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:29.967 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:30.225 "nvmf_tgt_1" 00:12:30.225 19:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:30.225 "nvmf_tgt_2" 00:12:30.225 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:30.225 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:30.536 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:30.537 19:04:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:30.537 true 00:12:30.537 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:30.537 true 00:12:30.805 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:30.806 rmmod nvme_tcp 00:12:30.806 rmmod nvme_fabrics 00:12:30.806 rmmod nvme_keyring 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1609279 ']' 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1609279 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1609279 ']' 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1609279 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
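Stripped of the xtrace noise, the multitarget checks above reduce to one create/verify/delete cycle. A condensed sketch ($rpc stands in for the multitarget_rpc.py path printed in the log; the baseline count of 1 is the default target each nvmf_tgt instance starts with, matching the jq length checks above):

    rpc=test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target so far
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new ones
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default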
00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1609279 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1609279' 00:12:30.806 killing process with pid 1609279 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1609279 00:12:30.806 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1609279 00:12:31.375 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:31.375 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:31.375 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:31.375 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.375 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.375 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.375 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.375 19:04:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.278 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:33.278 00:12:33.278 real 0m7.967s 00:12:33.278 user 0m11.901s 00:12:33.278 sys 0m2.781s 00:12:33.278 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:33.278 19:04:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:33.278 ************************************ 00:12:33.278 END TEST nvmf_multitarget 00:12:33.278 ************************************ 00:12:33.537 19:04:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:33.537 19:04:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:33.537 19:04:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:33.537 19:04:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:33.537 ************************************ 00:12:33.537 START TEST nvmf_rpc 00:12:33.537 ************************************ 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:33.537 * Looking for test storage... 
00:12:33.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:33.537 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:33.538 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:33.538 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:33.538 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:33.538 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:33.538 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:33.538 19:04:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:33.538 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:33.538 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:33.538 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:33.538 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.538 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:33.538 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:33.538 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:33.538 19:04:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.820 19:04:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:12:36.820 Found 0000:84:00.0 (0x8086 - 0x159b)
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:12:36.820 Found 0000:84:00.1 (0x8086 - 0x159b)
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:12:36.820 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:12:36.821 Found net devices under 0000:84:00.0: cvl_0_0
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:12:36.821 Found net devices under 0000:84:00.1: cvl_0_1
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:36.821 19:04:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:36.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:36.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms
00:12:36.821
00:12:36.821 --- 10.0.0.2 ping statistics ---
00:12:36.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:36.821 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:36.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:36.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms
00:12:36.821
00:12:36.821 --- 10.0.0.1 ping statistics ---
00:12:36.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:36.821 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1611651
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1611651
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1611651 ']'
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:36.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:36.821 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:36.821 [2024-07-24 19:04:42.256553] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization...
00:12:36.821 [2024-07-24 19:04:42.256658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:36.821 EAL: No free 2048 kB hugepages reported on node 1
00:12:36.821 [2024-07-24 19:04:42.441483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:37.079 [2024-07-24 19:04:42.704450] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:37.079 [2024-07-24 19:04:42.704572] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:37.079 [2024-07-24 19:04:42.704613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:37.080 [2024-07-24 19:04:42.704656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:37.080 [2024-07-24 19:04:42.704684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
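
For anyone reproducing this outside the harness, the nvmf_tcp_init trace above reduces to the short sketch below: one of the two back-to-back E810 ports is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator, and TCP port 4420 is opened between them. The interface names are the ones this run discovered (cvl_0_0/cvl_0_1); everything is copied from the traced commands, so treat it as a sketch, not a drop-in replacement for nvmf/common.sh.

  # Target port lives in its own namespace; initiator port stays in the root namespace.
  TARGET_IF=cvl_0_0
  INITIATOR_IF=cvl_0_1
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                  # initiator IP
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF" # target IP
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  # Open the NVMe/TCP port and verify reachability in both directions.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

The target application is then launched inside the namespace (ip netns exec "$NS" .../nvmf_tgt -m 0xF), exactly as the nvmf/common.sh@480 line above shows.
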
00:12:37.080 [2024-07-24 19:04:42.704873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:37.080 [2024-07-24 19:04:42.704934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:12:37.080 [2024-07-24 19:04:42.705040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:12:37.080 [2024-07-24 19:04:42.705064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:37.338 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:37.338 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0
00:12:37.338 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:12:37.338 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable
00:12:37.338 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.338 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:37.338 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:12:37.338 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.338 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.338 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.338 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:12:37.338 "tick_rate": 2700000000,
00:12:37.338 "poll_groups": [
00:12:37.338 {
00:12:37.338 "name": "nvmf_tgt_poll_group_000",
00:12:37.338 "admin_qpairs": 0,
00:12:37.338 "io_qpairs": 0,
00:12:37.338 "current_admin_qpairs": 0,
00:12:37.338 "current_io_qpairs": 0,
00:12:37.338 "pending_bdev_io": 0,
00:12:37.338 "completed_nvme_io": 0,
00:12:37.338 "transports": []
00:12:37.338 },
00:12:37.338 {
00:12:37.338 "name": "nvmf_tgt_poll_group_001",
00:12:37.338 "admin_qpairs": 0,
00:12:37.338 "io_qpairs": 0,
00:12:37.338 "current_admin_qpairs": 0,
00:12:37.338 "current_io_qpairs": 0,
00:12:37.338 "pending_bdev_io": 0,
00:12:37.339 "completed_nvme_io": 0,
00:12:37.339 "transports": []
00:12:37.339 },
00:12:37.339 {
00:12:37.339 "name": "nvmf_tgt_poll_group_002",
00:12:37.339 "admin_qpairs": 0,
00:12:37.339 "io_qpairs": 0,
00:12:37.339 "current_admin_qpairs": 0,
00:12:37.339 "current_io_qpairs": 0,
00:12:37.339 "pending_bdev_io": 0,
00:12:37.339 "completed_nvme_io": 0,
00:12:37.339 "transports": []
00:12:37.339 },
00:12:37.339 {
00:12:37.339 "name": "nvmf_tgt_poll_group_003",
00:12:37.339 "admin_qpairs": 0,
00:12:37.339 "io_qpairs": 0,
00:12:37.339 "current_admin_qpairs": 0,
00:12:37.339 "current_io_qpairs": 0,
00:12:37.339 "pending_bdev_io": 0,
00:12:37.339 "completed_nvme_io": 0,
00:12:37.339 "transports": []
00:12:37.339 }
00:12:37.339 ]
00:12:37.339 }'
00:12:37.339 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:12:37.339 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:12:37.339 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:12:37.339 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:12:37.339 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
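
The (( 4 == 4 )) above is rpc.sh asserting that nvmf_get_stats reports one poll group per core in the 0xF mask. Its jcount/jsum helpers, reconstructed from the traced jq/wc/awk pipelines (a sketch, not the verbatim script):

  # Count the lines a jq filter emits, and sum a numeric field across poll groups.
  jcount() {
      local filter=$1
      jq "$filter" <<< "$stats" | wc -l
  }
  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'
  }

  jcount '.poll_groups[].name'         # 4: one poll group per core
  jsum '.poll_groups[].admin_qpairs'   # 0: no host has connected yet
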
00:12:37.339 19:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:12:37.339 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
00:12:37.339 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:37.339 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.339 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.597 [2024-07-24 19:04:43.035248] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:12:37.597 "tick_rate": 2700000000,
00:12:37.597 "poll_groups": [
00:12:37.597 {
00:12:37.597 "name": "nvmf_tgt_poll_group_000",
00:12:37.597 "admin_qpairs": 0,
00:12:37.597 "io_qpairs": 0,
00:12:37.597 "current_admin_qpairs": 0,
00:12:37.597 "current_io_qpairs": 0,
00:12:37.597 "pending_bdev_io": 0,
00:12:37.597 "completed_nvme_io": 0,
00:12:37.597 "transports": [
00:12:37.597 {
00:12:37.597 "trtype": "TCP"
00:12:37.597 }
00:12:37.597 ]
00:12:37.597 },
00:12:37.597 {
00:12:37.597 "name": "nvmf_tgt_poll_group_001",
00:12:37.597 "admin_qpairs": 0,
00:12:37.597 "io_qpairs": 0,
00:12:37.597 "current_admin_qpairs": 0,
00:12:37.597 "current_io_qpairs": 0,
00:12:37.597 "pending_bdev_io": 0,
00:12:37.597 "completed_nvme_io": 0,
00:12:37.597 "transports": [
00:12:37.597 {
00:12:37.597 "trtype": "TCP"
00:12:37.597 }
00:12:37.597 ]
00:12:37.597 },
00:12:37.597 {
00:12:37.597 "name": "nvmf_tgt_poll_group_002",
00:12:37.597 "admin_qpairs": 0,
00:12:37.597 "io_qpairs": 0,
00:12:37.597 "current_admin_qpairs": 0,
00:12:37.597 "current_io_qpairs": 0,
00:12:37.597 "pending_bdev_io": 0,
00:12:37.597 "completed_nvme_io": 0,
00:12:37.597 "transports": [
00:12:37.597 {
00:12:37.597 "trtype": "TCP"
00:12:37.597 }
00:12:37.597 ]
00:12:37.597 },
00:12:37.597 {
00:12:37.597 "name": "nvmf_tgt_poll_group_003",
00:12:37.597 "admin_qpairs": 0,
00:12:37.597 "io_qpairs": 0,
00:12:37.597 "current_admin_qpairs": 0,
00:12:37.597 "current_io_qpairs": 0,
00:12:37.597 "pending_bdev_io": 0,
00:12:37.597 "completed_nvme_io": 0,
00:12:37.597 "transports": [
00:12:37.597 {
00:12:37.597 "trtype": "TCP"
00:12:37.597 }
00:12:37.597 ]
00:12:37.597 }
00:12:37.597 ]
00:12:37.597 }'
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.597 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.598 Malloc1
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.598 [2024-07-24 19:04:43.257809] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:12:37.598 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420
00:12:37.598 [2024-07-24 19:04:43.280360] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02'
00:12:37.855 Failed to write to /dev/nvme-fabrics: Input/output error
00:12:37.855 could not add new controller: failed to write to nvme-fabrics device
00:12:37.855 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:12:37.855 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:37.855 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:37.855 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:37.855 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:12:37.855 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.855 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:37.855 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.855 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:38.419 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:12:38.419 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:38.419 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:38.419 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:38.419 19:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:40.316 19:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:40.316 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:40.316 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:40.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]]
00:12:40.574 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:40.832 [2024-07-24 19:04:46.275681] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02'
00:12:40.832 Failed to write to /dev/nvme-fabrics: Input/output error
00:12:40.832 could not add new controller: failed to write to nvme-fabrics device
00:12:40.832 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1
00:12:40.832 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:40.832 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:40.832 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:40.832 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:12:40.832 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.832 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:40.832 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.832 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:41.397 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:12:41.397 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:41.397 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:41.397 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:41.397 19:04:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
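
The stretch above walks the subsystem's host ACL through each state: with allow_any_host disabled and the allowed-host list empty, the connect is rejected at ctrlr.c ("does not allow host"); after nvmf_subsystem_add_host the same connect succeeds; after nvmf_subsystem_remove_host it is rejected again; and nvmf_subsystem_allow_any_host -e reopens it to everyone. Outside the harness the same flow looks roughly like this; rpc_cmd in the trace wraps SPDK's scripts/rpc.py, and the default RPC socket is an assumption here:

  SUBSYS=nqn.2016-06.io.spdk:cnode1
  HOST=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

  scripts/rpc.py nvmf_subsystem_allow_any_host -d "$SUBSYS"    # enforce the ACL
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBSYS" --hostnqn "$HOST"   # rejected
  scripts/rpc.py nvmf_subsystem_add_host "$SUBSYS" "$HOST"     # allow this host NQN
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBSYS" --hostnqn "$HOST"   # accepted
  scripts/rpc.py nvmf_subsystem_remove_host "$SUBSYS" "$HOST"  # rejected again
  scripts/rpc.py nvmf_subsystem_allow_any_host -e "$SUBSYS"    # any host accepted
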
00:12:43.295 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:43.295 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:43.295 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:43.295 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:43.295 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:43.295 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:43.295 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:43.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:43.295 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:43.295 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:43.553 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:43.553 19:04:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:43.553 [2024-07-24 19:04:49.038933] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:43.553 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:44.118 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:44.118 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:44.118 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:44.118 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:44.118 19:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:46.645 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:46.645 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:46.645 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:46.645 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:46.645 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:46.645 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:46.645 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:46.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:46.645 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:46.645 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:46.645 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:46.645 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:46.645 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:46.645 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:46.645 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
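
waitforserial and waitforserial_disconnect, traced on every loop iteration above, simply poll lsblk until a block device carrying the subsystem serial (SPDKISFASTANDAWESOME) appears or disappears, giving up after 16 tries. A sketch of the pattern reconstructed from the trace (the real helpers live in autotest_common.sh and differ in detail):

  # Poll until a device with the given serial shows up (or goes away).
  waitforserial() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          sleep 2
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
      done
      return 1
  }
  waitforserial_disconnect() {
      local serial=$1 i=0
      while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          (( i++ > 15 )) && return 1
          sleep 2
      done
      return 0
  }
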
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:46.646 [2024-07-24 19:04:51.885916] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:46.646 19:04:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:46.904 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:46.904 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:46.904 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:46.904 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:46.904 19:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:49.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:49.433 [2024-07-24 19:04:54.720725] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.433 19:04:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:49.999 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:49.999 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:49.999 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:49.999 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:49.999 19:04:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:51.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:51.897 [2024-07-24 19:04:57.551420] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.897 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:51.898 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.898 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:51.898 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.898 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:51.898 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:51.898 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:51.898 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:51.898 19:04:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:52.479 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:52.479 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:52.479 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:52.479 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:52.479 19:04:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:55.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:55.008 [2024-07-24 19:05:00.349517] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.008 19:05:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:55.574 19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:55.574 19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0
00:12:55.574 19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:55.574 19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:55.574 19:05:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:57.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.508 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.771 [2024-07-24 19:05:03.241786] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.771 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.772 [2024-07-24 19:05:03.289807] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.772 [2024-07-24 19:05:03.338011] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 [2024-07-24 19:05:03.386182] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 [2024-07-24 19:05:03.434375] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.772 19:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.772 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:58.033 "tick_rate": 2700000000, 00:12:58.033 "poll_groups": [ 00:12:58.033 { 00:12:58.033 "name": "nvmf_tgt_poll_group_000", 00:12:58.033 "admin_qpairs": 2, 00:12:58.033 "io_qpairs": 84, 00:12:58.033 "current_admin_qpairs": 0, 00:12:58.033 "current_io_qpairs": 0, 00:12:58.033 "pending_bdev_io": 0, 00:12:58.033 "completed_nvme_io": 101, 00:12:58.033 "transports": [ 00:12:58.033 { 00:12:58.033 "trtype": "TCP" 00:12:58.033 } 00:12:58.033 ] 00:12:58.033 }, 00:12:58.033 { 00:12:58.033 "name": "nvmf_tgt_poll_group_001", 00:12:58.033 "admin_qpairs": 2, 00:12:58.033 "io_qpairs": 84, 00:12:58.033 "current_admin_qpairs": 0, 00:12:58.033 "current_io_qpairs": 0, 00:12:58.033 "pending_bdev_io": 0, 00:12:58.033 "completed_nvme_io": 154, 00:12:58.033 "transports": [ 00:12:58.033 { 00:12:58.033 "trtype": "TCP" 00:12:58.033 } 00:12:58.033 ] 00:12:58.033 }, 00:12:58.033 { 00:12:58.033 "name": "nvmf_tgt_poll_group_002", 00:12:58.033 "admin_qpairs": 1, 00:12:58.033 "io_qpairs": 84, 00:12:58.033 "current_admin_qpairs": 0, 00:12:58.033 "current_io_qpairs": 0, 00:12:58.033 "pending_bdev_io": 0, 00:12:58.033 "completed_nvme_io": 259, 00:12:58.033 "transports": [ 00:12:58.033 { 00:12:58.033 "trtype": "TCP" 00:12:58.033 } 00:12:58.033 ] 00:12:58.033 }, 00:12:58.033 { 00:12:58.033 "name": "nvmf_tgt_poll_group_003", 00:12:58.033 "admin_qpairs": 2, 00:12:58.033 "io_qpairs": 84, 00:12:58.033 "current_admin_qpairs": 0, 00:12:58.033 "current_io_qpairs": 0, 00:12:58.033 "pending_bdev_io": 0, 00:12:58.033 "completed_nvme_io": 172, 00:12:58.033 "transports": [ 00:12:58.033 { 00:12:58.033 "trtype": "TCP" 00:12:58.033 } 00:12:58.033 ] 00:12:58.033 } 00:12:58.033 ] 00:12:58.033 }' 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:58.033 rmmod nvme_tcp 00:12:58.033 rmmod nvme_fabrics 00:12:58.033 rmmod nvme_keyring 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1611651 ']' 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1611651 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1611651 ']' 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1611651 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1611651 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1611651' 00:12:58.033 killing process with pid 1611651 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1611651 00:12:58.033 19:05:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1611651 00:12:58.600 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:58.600 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:58.600 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:58.600 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.600 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:58.600 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.600 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.600 19:05:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.502 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:00.502 00:13:00.502 real 0m27.150s 00:13:00.502 user 1m25.532s 00:13:00.502 sys 0m4.828s 00:13:00.502 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:00.502 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.502 ************************************ 00:13:00.502 END TEST nvmf_rpc 00:13:00.502 ************************************ 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:00.762 ************************************ 00:13:00.762 START TEST nvmf_invalid 00:13:00.762 ************************************ 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:00.762 * Looking for test storage... 00:13:00.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:00.762 19:05:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:00.762 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:00.763 19:05:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:00.763 19:05:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:04.052 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.052 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:04.053 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:04.053 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:04.053 Found net devices under 0000:84:00.0: cvl_0_0 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.053 19:05:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:04.053 Found net devices under 0000:84:00.1: cvl_0_1 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.053 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:04.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:04.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:13:04.054 00:13:04.054 --- 10.0.0.2 ping statistics --- 00:13:04.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.054 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:04.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:13:04.054 00:13:04.054 --- 10.0.0.1 ping statistics --- 00:13:04.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.054 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1616280 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1616280 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1616280 ']' 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.054 19:05:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:04.054 19:05:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:04.054 [2024-07-24 19:05:09.417071] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:13:04.054 [2024-07-24 19:05:09.417237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.054 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.054 [2024-07-24 19:05:09.600552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:04.314 [2024-07-24 19:05:09.860953] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.314 [2024-07-24 19:05:09.861029] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.314 [2024-07-24 19:05:09.861057] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.314 [2024-07-24 19:05:09.861080] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.314 [2024-07-24 19:05:09.861110] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
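The invalid.sh checks recorded below exercise the RPC surface with deliberately malformed arguments and assert on the JSON-RPC error text returned by rpc.py. A minimal standalone sketch of that negative-test pattern, assuming a running nvmf_tgt and the rpc.py path used throughout this job (the cnode numbers are arbitrary, and `|| true` is only there so a `set -e` shell does not abort on the expected failure):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# A create against a nonexistent target name must fail with JSON-RPC -32603.
out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18751 2>&1) || true
[[ $out == *"Unable to find target foobar"* ]]
# An unprintable byte (0x1f) embedded in the serial number must be rejected as an invalid SN.
out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\x1f' nqn.2016-06.io.spdk:cnode12986 2>&1) || true
[[ $out == *"Invalid SN"* ]]
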
00:13:04.314 [2024-07-24 19:05:09.861183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.314 [2024-07-24 19:05:09.861248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.314 [2024-07-24 19:05:09.861287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:04.314 [2024-07-24 19:05:09.861296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.881 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:04.881 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:04.881 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:04.881 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:04.881 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:05.139 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.139 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:05.139 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18751 00:13:05.397 [2024-07-24 19:05:10.907884] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:05.397 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:05.397 { 00:13:05.397 "nqn": "nqn.2016-06.io.spdk:cnode18751", 00:13:05.397 "tgt_name": "foobar", 00:13:05.397 "method": "nvmf_create_subsystem", 00:13:05.397 "req_id": 1 00:13:05.397 } 00:13:05.397 Got JSON-RPC error response 00:13:05.397 response: 00:13:05.397 { 00:13:05.397 "code": -32603, 00:13:05.397 "message": "Unable to find target foobar" 00:13:05.397 }' 00:13:05.397 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:05.397 { 00:13:05.397 "nqn": "nqn.2016-06.io.spdk:cnode18751", 00:13:05.397 "tgt_name": "foobar", 00:13:05.397 "method": "nvmf_create_subsystem", 00:13:05.397 "req_id": 1 00:13:05.397 } 00:13:05.397 Got JSON-RPC error response 00:13:05.397 response: 00:13:05.397 { 00:13:05.397 "code": -32603, 00:13:05.397 "message": "Unable to find target foobar" 00:13:05.397 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:05.397 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:05.397 19:05:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12986 00:13:05.964 [2024-07-24 19:05:11.369606] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12986: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:05.964 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:05.964 { 00:13:05.964 "nqn": "nqn.2016-06.io.spdk:cnode12986", 00:13:05.964 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:05.964 "method": "nvmf_create_subsystem", 00:13:05.964 "req_id": 1 00:13:05.964 } 00:13:05.964 Got JSON-RPC error 
response 00:13:05.964 response: 00:13:05.964 { 00:13:05.964 "code": -32602, 00:13:05.964 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:05.964 }' 00:13:05.964 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:05.964 { 00:13:05.964 "nqn": "nqn.2016-06.io.spdk:cnode12986", 00:13:05.964 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:05.964 "method": "nvmf_create_subsystem", 00:13:05.964 "req_id": 1 00:13:05.964 } 00:13:05.964 Got JSON-RPC error response 00:13:05.964 response: 00:13:05.964 { 00:13:05.964 "code": -32602, 00:13:05.964 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:05.964 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:05.964 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:05.964 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7650 00:13:06.223 [2024-07-24 19:05:11.710832] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7650: invalid model number 'SPDK_Controller' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:06.223 { 00:13:06.223 "nqn": "nqn.2016-06.io.spdk:cnode7650", 00:13:06.223 "model_number": "SPDK_Controller\u001f", 00:13:06.223 "method": "nvmf_create_subsystem", 00:13:06.223 "req_id": 1 00:13:06.223 } 00:13:06.223 Got JSON-RPC error response 00:13:06.223 response: 00:13:06.223 { 00:13:06.223 "code": -32602, 00:13:06.223 "message": "Invalid MN SPDK_Controller\u001f" 00:13:06.223 }' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:06.223 { 00:13:06.223 "nqn": "nqn.2016-06.io.spdk:cnode7650", 00:13:06.223 "model_number": "SPDK_Controller\u001f", 00:13:06.223 "method": "nvmf_create_subsystem", 00:13:06.223 "req_id": 1 00:13:06.223 } 00:13:06.223 Got JSON-RPC error response 00:13:06.223 response: 00:13:06.223 { 00:13:06.223 "code": -32602, 00:13:06.223 "message": "Invalid MN SPDK_Controller\u001f" 00:13:06.223 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 112 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:06.223 19:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.223 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
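# [annotation] The trace above and below repeats one fixed pattern per
# generated character: gen_random_s in target/invalid.sh picks a decimal code
# from its chars array (ASCII 32 through 127), renders it as hex with
# 'printf %x', decodes it with "echo -e '\xNN'", and appends the byte to
# string. A minimal sketch of that loop, assuming the index is drawn with
# bash's RANDOM (the log shows each pick's result, not how it was chosen):
gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 32 127))    # decimal codes: printable ASCII plus DEL (0x7f)
    for (( ll = 0; ll < length; ll++ )); do
        local code=${chars[$((RANDOM % ${#chars[@]}))]}    # assumed selection method
        string+=$(echo -e "\x$(printf %x "$code")")        # e.g. 112 -> \x70 -> 'p'
    done
    echo "$string"
}
# The requested lengths are deliberate: 21 bytes here for a serial number, and
# 41 further down for a model number, each one byte past the 20-byte SN and
# 40-byte MN fields of the NVMe Identify Controller data, so even an
# all-printable random string must be rejected. The real script also tests the
# first character against '-' (the [[ ... == \- ]] step before each final
# echo), presumably so the result cannot be taken for an echo option; that
# branch is omitted from this sketch.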
00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ p == \- ]] 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'pBwvG`)vvlgv-#Fty|!yp' 00:13:06.224 19:05:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'pBwvG`)vvlgv-#Fty|!yp' nqn.2016-06.io.spdk:cnode27380 00:13:06.791 [2024-07-24 19:05:12.361118] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27380: invalid serial number 'pBwvG`)vvlgv-#Fty|!yp' 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:06.791 { 00:13:06.791 "nqn": "nqn.2016-06.io.spdk:cnode27380", 00:13:06.791 "serial_number": "pBwvG`)vvlgv-#Fty|!yp", 00:13:06.791 "method": "nvmf_create_subsystem", 00:13:06.791 "req_id": 1 00:13:06.791 } 00:13:06.791 Got JSON-RPC error response 00:13:06.791 response: 00:13:06.791 { 00:13:06.791 "code": -32602, 00:13:06.791 "message": "Invalid SN pBwvG`)vvlgv-#Fty|!yp" 00:13:06.791 }' 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:06.791 { 00:13:06.791 "nqn": "nqn.2016-06.io.spdk:cnode27380", 00:13:06.791 "serial_number": "pBwvG`)vvlgv-#Fty|!yp", 00:13:06.791 "method": "nvmf_create_subsystem", 00:13:06.791 "req_id": 1 00:13:06.791 } 00:13:06.791 Got JSON-RPC error response 00:13:06.791 response: 00:13:06.791 { 00:13:06.791 "code": -32602, 00:13:06.791 "message": "Invalid SN pBwvG`)vvlgv-#Fty|!yp" 00:13:06.791 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' 
'94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 
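# [annotation] Every negative case in this suite follows the JSON-RPC exchange
# shown above for the 21-character serial: rpc.py echoes the failing request
# together with the error object, the target answers code -32602 (the standard
# JSON-RPC "invalid params" code), and the script asserts on the message text
# rather than on the numeric code; the backslash-per-character pattern in the
# trace (*\I\n\v\a\l\i\d\ \S\N*) is just an escaped glob for the literal
# substring "Invalid SN". A sketch of that capture-and-match step, reusing the
# rpc.py path and NQN from the log (the success/failure echoes are
# illustrative, and rpc.py is assumed to exit nonzero on an error response):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
out=$("$rpc" nvmf_create_subsystem -s 'pBwvG`)vvlgv-#Fty|!yp' \
    nqn.2016-06.io.spdk:cnode27380 2>&1) || true
if [[ $out == *"Invalid SN"* ]]; then
    echo "rejected as expected"
else
    echo "unexpected response: $out" >&2
fi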
00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:06.791 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x67' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 69 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:06.792 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll < length )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:07.051 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # string+=K 00:13:07.052 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.052 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.052 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:07.052 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:07.052 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:07.052 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:07.052 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:07.052 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ > == \- ]] 00:13:07.052 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '>zWW,:1s:vt|g@JLIQE[\qZ|Y/R*0S2q&S3h#qKY' 00:13:07.052 19:05:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '>zWW,:1s:vt|g@JLIQE[\qZ|Y/R*0S2q&S3h#qKY' nqn.2016-06.io.spdk:cnode22950 00:13:07.617 [2024-07-24 19:05:13.031592] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22950: invalid model number '>zWW,:1s:vt|g@JLIQE[\qZ|Y/R*0S2q&S3h#qKY' 00:13:07.617 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:07.617 { 00:13:07.617 "nqn": "nqn.2016-06.io.spdk:cnode22950", 00:13:07.617 "model_number": ">zWW,:1s:vt|g@JLIQ\u007fE[\\qZ|Y/R*0S2q&S3h#qKY", 00:13:07.617 "method": "nvmf_create_subsystem", 00:13:07.617 "req_id": 1 00:13:07.617 } 00:13:07.617 Got JSON-RPC error response 00:13:07.617 response: 00:13:07.617 { 00:13:07.617 "code": -32602, 00:13:07.617 "message": "Invalid MN >zWW,:1s:vt|g@JLIQ\u007fE[\\qZ|Y/R*0S2q&S3h#qKY" 00:13:07.617 }' 00:13:07.617 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:07.617 { 00:13:07.617 "nqn": "nqn.2016-06.io.spdk:cnode22950", 00:13:07.617 "model_number": ">zWW,:1s:vt|g@JLIQ\u007fE[\\qZ|Y/R*0S2q&S3h#qKY", 00:13:07.617 "method": "nvmf_create_subsystem", 00:13:07.617 "req_id": 1 00:13:07.617 } 00:13:07.617 Got JSON-RPC error response 00:13:07.617 response: 00:13:07.617 { 00:13:07.617 "code": -32602, 00:13:07.617 "message": "Invalid MN >zWW,:1s:vt|g@JLIQ\u007fE[\\qZ|Y/R*0S2q&S3h#qKY" 00:13:07.617 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:07.617 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:07.876 [2024-07-24 19:05:13.328694] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.876 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:08.133 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:08.133 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:08.133 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:08.133 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:08.133 19:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:08.391 [2024-07-24 19:05:13.952014] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:08.391 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:08.391 { 00:13:08.391 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:08.391 "listen_address": { 00:13:08.391 "trtype": "tcp", 00:13:08.391 "traddr": "", 00:13:08.391 "trsvcid": "4421" 00:13:08.391 }, 00:13:08.391 "method": "nvmf_subsystem_remove_listener", 00:13:08.391 "req_id": 1 00:13:08.391 } 00:13:08.391 Got JSON-RPC error response 00:13:08.391 response: 00:13:08.391 { 00:13:08.391 "code": -32602, 00:13:08.391 "message": "Invalid parameters" 00:13:08.391 }' 00:13:08.391 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:08.391 { 00:13:08.391 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:08.391 "listen_address": { 00:13:08.391 "trtype": "tcp", 00:13:08.391 "traddr": "", 00:13:08.391 "trsvcid": "4421" 00:13:08.391 }, 00:13:08.391 "method": "nvmf_subsystem_remove_listener", 00:13:08.391 "req_id": 1 00:13:08.391 } 00:13:08.391 Got JSON-RPC error response 00:13:08.391 response: 00:13:08.391 { 00:13:08.391 "code": -32602, 00:13:08.391 "message": "Invalid parameters" 00:13:08.391 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:08.391 19:05:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5878 -i 0 00:13:08.649 [2024-07-24 19:05:14.253006] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5878: invalid cntlid range [0-65519] 00:13:08.649 19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:08.649 { 00:13:08.649 "nqn": "nqn.2016-06.io.spdk:cnode5878", 00:13:08.649 "min_cntlid": 0, 00:13:08.649 "method": "nvmf_create_subsystem", 00:13:08.649 "req_id": 1 00:13:08.649 } 00:13:08.649 Got JSON-RPC error response 00:13:08.649 response: 00:13:08.649 { 00:13:08.649 "code": -32602, 00:13:08.649 "message": "Invalid cntlid range [0-65519]" 00:13:08.649 }' 00:13:08.649 19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:08.649 { 00:13:08.650 "nqn": "nqn.2016-06.io.spdk:cnode5878", 00:13:08.650 "min_cntlid": 0, 00:13:08.650 "method": "nvmf_create_subsystem", 00:13:08.650 "req_id": 1 00:13:08.650 } 00:13:08.650 Got JSON-RPC error response 00:13:08.650 response: 00:13:08.650 { 00:13:08.650 "code": -32602, 00:13:08.650 "message": "Invalid cntlid range [0-65519]" 00:13:08.650 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:08.650 19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13198 -i 65520 00:13:09.216 [2024-07-24 19:05:14.670485] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13198: invalid cntlid range [65520-65519] 00:13:09.216 19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:09.216 { 00:13:09.216 "nqn": "nqn.2016-06.io.spdk:cnode13198", 00:13:09.216 "min_cntlid": 65520, 00:13:09.216 "method": "nvmf_create_subsystem", 
00:13:09.216 "req_id": 1 00:13:09.216 } 00:13:09.216 Got JSON-RPC error response 00:13:09.216 response: 00:13:09.216 { 00:13:09.216 "code": -32602, 00:13:09.216 "message": "Invalid cntlid range [65520-65519]" 00:13:09.216 }' 00:13:09.216 19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:09.216 { 00:13:09.216 "nqn": "nqn.2016-06.io.spdk:cnode13198", 00:13:09.216 "min_cntlid": 65520, 00:13:09.216 "method": "nvmf_create_subsystem", 00:13:09.216 "req_id": 1 00:13:09.216 } 00:13:09.216 Got JSON-RPC error response 00:13:09.216 response: 00:13:09.216 { 00:13:09.216 "code": -32602, 00:13:09.216 "message": "Invalid cntlid range [65520-65519]" 00:13:09.216 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:09.216 19:05:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3240 -I 0 00:13:09.474 [2024-07-24 19:05:15.120143] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3240: invalid cntlid range [1-0] 00:13:09.474 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:09.474 { 00:13:09.474 "nqn": "nqn.2016-06.io.spdk:cnode3240", 00:13:09.474 "max_cntlid": 0, 00:13:09.474 "method": "nvmf_create_subsystem", 00:13:09.474 "req_id": 1 00:13:09.474 } 00:13:09.474 Got JSON-RPC error response 00:13:09.474 response: 00:13:09.474 { 00:13:09.474 "code": -32602, 00:13:09.474 "message": "Invalid cntlid range [1-0]" 00:13:09.474 }' 00:13:09.474 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:09.474 { 00:13:09.474 "nqn": "nqn.2016-06.io.spdk:cnode3240", 00:13:09.474 "max_cntlid": 0, 00:13:09.474 "method": "nvmf_create_subsystem", 00:13:09.474 "req_id": 1 00:13:09.474 } 00:13:09.474 Got JSON-RPC error response 00:13:09.474 response: 00:13:09.474 { 00:13:09.474 "code": -32602, 00:13:09.474 "message": "Invalid cntlid range [1-0]" 00:13:09.474 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:09.474 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16864 -I 65520 00:13:10.040 [2024-07-24 19:05:15.617926] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16864: invalid cntlid range [1-65520] 00:13:10.040 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:10.040 { 00:13:10.040 "nqn": "nqn.2016-06.io.spdk:cnode16864", 00:13:10.040 "max_cntlid": 65520, 00:13:10.040 "method": "nvmf_create_subsystem", 00:13:10.040 "req_id": 1 00:13:10.040 } 00:13:10.040 Got JSON-RPC error response 00:13:10.040 response: 00:13:10.040 { 00:13:10.040 "code": -32602, 00:13:10.040 "message": "Invalid cntlid range [1-65520]" 00:13:10.040 }' 00:13:10.040 19:05:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:10.040 { 00:13:10.040 "nqn": "nqn.2016-06.io.spdk:cnode16864", 00:13:10.040 "max_cntlid": 65520, 00:13:10.040 "method": "nvmf_create_subsystem", 00:13:10.041 "req_id": 1 00:13:10.041 } 00:13:10.041 Got JSON-RPC error response 00:13:10.041 response: 00:13:10.041 { 00:13:10.041 "code": -32602, 00:13:10.041 "message": "Invalid cntlid range [1-65520]" 00:13:10.041 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.041 19:05:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6536 -i 6 -I 5 00:13:10.607 [2024-07-24 19:05:16.087674] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6536: invalid cntlid range [6-5] 00:13:10.607 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:10.607 { 00:13:10.607 "nqn": "nqn.2016-06.io.spdk:cnode6536", 00:13:10.607 "min_cntlid": 6, 00:13:10.607 "max_cntlid": 5, 00:13:10.607 "method": "nvmf_create_subsystem", 00:13:10.607 "req_id": 1 00:13:10.607 } 00:13:10.607 Got JSON-RPC error response 00:13:10.607 response: 00:13:10.607 { 00:13:10.607 "code": -32602, 00:13:10.607 "message": "Invalid cntlid range [6-5]" 00:13:10.607 }' 00:13:10.607 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:10.607 { 00:13:10.607 "nqn": "nqn.2016-06.io.spdk:cnode6536", 00:13:10.607 "min_cntlid": 6, 00:13:10.607 "max_cntlid": 5, 00:13:10.607 "method": "nvmf_create_subsystem", 00:13:10.607 "req_id": 1 00:13:10.607 } 00:13:10.607 Got JSON-RPC error response 00:13:10.607 response: 00:13:10.607 { 00:13:10.607 "code": -32602, 00:13:10.607 "message": "Invalid cntlid range [6-5]" 00:13:10.607 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:10.607 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:10.607 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:10.607 { 00:13:10.607 "name": "foobar", 00:13:10.607 "method": "nvmf_delete_target", 00:13:10.607 "req_id": 1 00:13:10.607 } 00:13:10.607 Got JSON-RPC error response 00:13:10.607 response: 00:13:10.607 { 00:13:10.607 "code": -32602, 00:13:10.607 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:10.607 }' 00:13:10.607 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:10.607 { 00:13:10.607 "name": "foobar", 00:13:10.607 "method": "nvmf_delete_target", 00:13:10.607 "req_id": 1 00:13:10.607 } 00:13:10.607 Got JSON-RPC error response 00:13:10.607 response: 00:13:10.607 { 00:13:10.607 "code": -32602, 00:13:10.607 "message": "The specified target doesn't exist, cannot delete it." 
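# [annotation] The five cntlid rejections earlier in this block probe SPDK's
# controller-ID bounds: min_cntlid must be at least 1, max_cntlid at most
# 65519 (0xFFEF; the IDs above it are reserved by NVMe over Fabrics), and min
# must not exceed max, which is why [0-65519], [65520-65519], [1-0], [1-65520]
# and [6-5] all come back as -32602. A sketch that walks the same boundaries
# with the -i/-I flags used in the log (cnode-boundary is a placeholder NQN,
# not taken from the log):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for args in '-i 0' '-i 65520' '-I 0' '-I 65520' '-i 6 -I 5'; do
    # $args stays unquoted so each flag and its value split into words
    if out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode-boundary $args 2>&1); then
        echo "unexpectedly accepted: $args" >&2
    elif [[ $out != *"Invalid cntlid range"* ]]; then
        echo "unexpected error for $args: $out" >&2
    fi
done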
00:13:10.607 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:10.608 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:10.608 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:10.608 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:10.608 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:10.608 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:10.608 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:10.608 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:10.608 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:10.608 rmmod nvme_tcp 00:13:10.872 rmmod nvme_fabrics 00:13:10.872 rmmod nvme_keyring 00:13:10.872 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:10.872 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:10.872 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:10.872 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1616280 ']' 00:13:10.872 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1616280 00:13:10.872 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1616280 ']' 00:13:10.872 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1616280 00:13:10.872 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:10.872 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:10.872 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1616280 00:13:10.872 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:10.872 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:10.872 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1616280' 00:13:10.872 killing process with pid 1616280 00:13:10.872 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1616280 00:13:10.872 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1616280 00:13:11.143 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:11.143 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:11.143 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:11.143 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.143 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:11.143 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.143 
19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:11.143 19:05:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:13.678
00:13:13.678 real 0m12.561s
00:13:13.678 user 0m33.273s
00:13:13.678 sys 0m3.553s
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:13.678 ************************************
00:13:13.678 END TEST nvmf_invalid
00:13:13.678 ************************************
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:13.678 ************************************
00:13:13.678 START TEST nvmf_connect_stress
00:13:13.678 ************************************
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:13.678 * Looking for test storage...
00:13:13.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- #
NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.678 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:13.679 19:05:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:16.215 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:16.215 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:16.215 Found net devices under 0000:84:00.0: cvl_0_0 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:16.215 19:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:16.215 Found net devices under 0000:84:00.1: cvl_0_1 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:16.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:16.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:13:16.215 00:13:16.215 --- 10.0.0.2 ping statistics --- 00:13:16.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.215 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:13:16.215 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:16.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:16.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:13:16.215 00:13:16.216 --- 10.0.0.1 ping statistics --- 00:13:16.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:16.216 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:13:16.216 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:16.216 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:16.216 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:16.216 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:16.216 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:16.216 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:16.216 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:16.216 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:16.216 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:16.473 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:16.473 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:16.473 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:16.473 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.473 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1619321 00:13:16.473 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:16.473 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1619321 00:13:16.473 19:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1619321 ']' 00:13:16.473 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.473 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.473 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.473 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.473 19:05:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:16.473 [2024-07-24 19:05:21.971906] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:13:16.473 [2024-07-24 19:05:21.972006] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:16.473 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.474 [2024-07-24 19:05:22.057210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:16.731 [2024-07-24 19:05:22.195794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:16.731 [2024-07-24 19:05:22.195858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:16.731 [2024-07-24 19:05:22.195878] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:16.731 [2024-07-24 19:05:22.195894] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:16.731 [2024-07-24 19:05:22.195909] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
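For orientation, here is the loopback topology that nvmf_tcp_init assembled in the trace above, condensed to the commands the log actually ran (cvl_0_0 and cvl_0_1 are the two detected E810 ports; all names and addresses are as traced, not assumptions):

  ip netns add cvl_0_0_ns_spdk                       # the target gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the first port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
  ping -c 1 10.0.0.2                                 # host -> namespace reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host

nvmf_tgt then runs inside that namespace (the startup notices just above), so initiator and target traffic crosses the two ports instead of sharing one network stack.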
00:13:16.731 [2024-07-24 19:05:22.196013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:16.731 [2024-07-24 19:05:22.196072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:16.731 [2024-07-24 19:05:22.196076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.666 [2024-07-24 19:05:23.116730] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.666 [2024-07-24 19:05:23.157229] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.666 NULL1 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=1619483 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
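The stress run just launched, and the supervision pattern around it, as a sketch paraphrased from the trace rather than quoted verbatim from connect_stress.sh ($rpcs is the rpc.txt path set at line 23 of the script, which the seq 1 20 / cat loop running around this point appears to populate with 20 queued RPC requests):

  connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -t 10 &                      # 10 s of connect/disconnect churn against cnode1
  PERF_PID=$!                      # 1619483 in this run
  while kill -0 "$PERF_PID"; do    # connect_stress.sh line 34 in the trace: still running?
      rpc_cmd < "$rpcs"            # line 35: replay the queued RPCs at the live target
  done
  wait "$PERF_PID"                 # line 38: reap the tool once kill -0 fails
  rm -f "$rpcs"                    # line 39: drop the request file

The repeated "[[ 0 == 0 ]] / kill -0 1619483 / rpc_cmd" blocks that follow are the iterations of that while loop, one every few hundred milliseconds until the 10 s run ends.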
00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.666 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:17.924 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.924 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:17.924 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:17.924 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.924 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.181 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.181 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:18.181 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.181 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.181 19:05:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:18.746 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.746 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:18.746 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:18.746 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.746 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.003 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.003 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:19.003 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.003 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.003 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.260 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.260 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:19.260 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.260 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.260 19:05:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:19.518 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.518 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:19.518 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:19.518 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.518 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.084 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.084 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:20.084 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.084 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.084 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.343 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.343 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:20.343 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.343 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.343 19:05:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.600 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.600 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:20.600 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.600 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.600 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:20.858 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.858 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:20.858 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:20.858 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.858 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.115 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.115 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:21.115 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.115 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.115 19:05:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.681 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.681 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:21.681 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.681 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.681 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:21.939 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.939 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:21.939 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:21.939 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.939 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.197 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.197 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:22.197 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.197 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.197 19:05:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.454 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.454 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:22.454 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.454 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.454 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.712 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.712 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:22.712 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.712 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.712 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.277 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.277 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:23.277 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.277 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.277 19:05:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.535 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.535 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:23.535 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.535 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.535 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.793 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.793 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:23.793 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.793 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.793 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.051 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.051 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:24.051 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.051 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.051 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.309 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.309 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:24.309 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.310 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.310 19:05:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.877 19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.877 19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:24.877 19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.877 19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.877 19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.135 19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.135 19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:25.135 19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.135 19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.135 19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.393 19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.393 19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:25.393 19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.393 19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.393 19:05:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.651 19:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.651 19:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:25.651 19:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.651 19:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.651 19:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.909 19:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.909 19:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:25.909 19:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.909 19:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.909 19:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.475 19:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.475 19:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:26.475 19:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.475 19:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.475 19:05:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.733 19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.733 19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:26.733 19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.733 19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.733 19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.991 19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.991 19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:26.991 19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.991 19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.991 19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.249 19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.249 19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:27.249 19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.249 19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.249 19:05:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.507 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.507 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:27.507 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.507 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.507 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.776 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1619483 00:13:28.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: 
kill: (1619483) - No such process 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1619483 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:28.060 rmmod nvme_tcp 00:13:28.060 rmmod nvme_fabrics 00:13:28.060 rmmod nvme_keyring 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1619321 ']' 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1619321 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1619321 ']' 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1619321 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1619321 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1619321' 00:13:28.060 killing process with pid 1619321 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1619321 00:13:28.060 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1619321 00:13:28.327 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:28.327 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:28.327 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:13:28.328 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.328 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:28.328 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.328 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.328 19:05:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:30.862 00:13:30.862 real 0m17.135s 00:13:30.862 user 0m41.455s 00:13:30.862 sys 0m6.845s 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.862 ************************************ 00:13:30.862 END TEST nvmf_connect_stress 00:13:30.862 ************************************ 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:30.862 ************************************ 00:13:30.862 START TEST nvmf_fused_ordering 00:13:30.862 ************************************ 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:30.862 * Looking for test storage... 
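Each test in the suite ends with the same teardown that nvmftestfini traced at the end of nvmf_connect_stress above, condensed here from the log:

  modprobe -v -r nvme-tcp        # unloads nvme_tcp, nvme_fabrics, nvme_keyring; retried in a 1..20 loop
  modprobe -v -r nvme-fabrics
  kill 1619321 && wait 1619321   # killprocess: stop the nvmf_tgt reactor process
  _remove_spdk_ns                # drop the cvl_0_0_ns_spdk namespace
  ip -4 addr flush cvl_0_1       # clear the initiator-side address

The next test, nvmf_fused_ordering, then starts from scratch below: it re-sources nvmf/common.sh, regenerates the host NQN with nvme gen-hostnqn, and repeats the same PCI discovery and namespace setup.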
00:13:30.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:30.862 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.863 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.863 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.863 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:30.863 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:30.863 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:30.863 19:05:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:33.397 19:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.397 19:05:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:33.397 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:33.397 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
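The scan traced above is table-driven: nvmf/common.sh seeds per-family arrays of PCI vendor:device IDs (e810 gets 0x1592 and 0x159b, x722 gets 0x37d2, and the Mellanox list follows), then walks the discovered devices and keeps the family selected by SPDK_TEST_NVMF_NICS; in this run both E810 ports at 0000:84:00.0/.1 match 0x8086:0x159b and sit on the ice driver. A minimal standalone sketch of the same matching straight from sysfs, carrying only the two e810 IDs seen here (not the harness's full table):

  for pci in /sys/bus/pci/devices/*; do
    ven=$(cat "$pci/vendor")           # e.g. 0x8086 (Intel)
    dev=$(cat "$pci/device")           # e.g. 0x159b (E810)
    case "$ven:$dev" in
      0x8086:0x1592|0x8086:0x159b)
        echo "Found ${pci##*/} ($ven - $dev)" ;;
    esac
  done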
00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:33.397 Found net devices under 0000:84:00.0: cvl_0_0 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:33.397 Found net devices under 0000:84:00.1: cvl_0_1 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:33.397 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.398 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.656 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:33.656 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.656 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.656 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.656 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:33.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:13:33.656 00:13:33.656 --- 10.0.0.2 ping statistics --- 00:13:33.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.657 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:33.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:13:33.657 00:13:33.657 --- 10.0.0.1 ping statistics --- 00:13:33.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.657 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1622771 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1622771 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1622771 ']' 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:33.657 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:33.657 [2024-07-24 19:05:39.256577] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
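Stripped of the xtrace prefixes, the sequence just traced reduces to a handful of commands: wire a point-to-point topology with the target port in its own network namespace, verify it with the two pings, then start nvmf_tgt inside that namespace and wait for its RPC socket. A condensed sketch, assuming the interface names and flags from this run (paths relative to an SPDK checkout; the until-loop is a simplification of the waitforlisten helper, which also caps its retries):

  ip netns add cvl_0_0_ns_spdk                        # isolated namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                  # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!                                          # 1622771 in this run
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1                                         # block until the target answers RPCs
  done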
00:13:33.657 [2024-07-24 19:05:39.256664] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.657 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.657 [2024-07-24 19:05:39.342074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.915 [2024-07-24 19:05:39.485733] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.915 [2024-07-24 19:05:39.485810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.915 [2024-07-24 19:05:39.485831] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.915 [2024-07-24 19:05:39.485847] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.915 [2024-07-24 19:05:39.485862] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.915 [2024-07-24 19:05:39.485902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.173 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:34.173 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:34.173 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:34.173 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:34.173 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:34.173 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.173 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:34.173 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.173 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:34.173 [2024-07-24 19:05:39.659496] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:34.173 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.173 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.174 [2024-07-24 19:05:39.675732] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:34.174 NULL1 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.174 19:05:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:34.174 [2024-07-24 19:05:39.724668] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
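The rpc_cmd wrappers above forward to rpc.py; spelled out with the same arguments as this run (transport flags -o and -u 8192 as captured; -a allows any host, -s sets the serial, -m 10 caps namespaces; the 1000 MiB null bdev with 512-byte blocks is the "size: 1GB" the initiator reports below), the configuration and the test launch amount to:

  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"        # rpc_cmd's default socket
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512
  $RPC bdev_wait_for_examine
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'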
00:13:34.174 [2024-07-24 19:05:39.724720] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622799 ] 00:13:34.174 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.739 Attached to nqn.2016-06.io.spdk:cnode1 00:13:34.739 Namespace ID: 1 size: 1GB
00:13:34.739 fused_ordering(0) ... 00:13:37.745 fused_ordering(1023)
00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:37.745 rmmod nvme_tcp 00:13:37.745 rmmod nvme_fabrics 00:13:37.745 rmmod nvme_keyring 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0
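The block above is the success-path half of the trap installed by nvmftestinit: nvmftestfini disarms the trap, syncs, and retries module unload up to 20 times before stopping the target. Reduced to plain commands (pid and interface names from this run; the namespace delete is an assumed equivalent of the _remove_spdk_ns helper that runs below):

  trap - SIGINT SIGTERM EXIT       # disarm the cleanup trap once the test has passed
  sync
  modprobe -v -r nvme-tcp          # drops nvme_tcp, nvme_fabrics, nvme_keyring as logged
  modprobe -v -r nvme-fabrics
  kill 1622771                     # killprocess: stop the nvmf_tgt reactor
  ip netns delete cvl_0_0_ns_spdk  # assumption: what _remove_spdk_ns amounts to here
  ip -4 addr flush cvl_0_1         # drop the initiator-side address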
-- # return 0 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1622771 ']' 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1622771 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1622771 ']' 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1622771 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1622771 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1622771' 00:13:37.745 killing process with pid 1622771 00:13:37.745 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1622771 00:13:37.746 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1622771 00:13:38.003 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:38.003 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:38.003 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:38.003 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:38.003 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:38.003 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.003 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:38.003 19:05:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:40.534 00:13:40.534 real 0m9.556s 00:13:40.534 user 0m6.558s 00:13:40.534 sys 0m4.881s 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:40.534 ************************************ 00:13:40.534 END TEST nvmf_fused_ordering 00:13:40.534 ************************************ 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.534 19:05:45 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:40.534 ************************************ 00:13:40.534 START TEST nvmf_ns_masking 00:13:40.534 ************************************ 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:40.534 * Looking for test storage... 00:13:40.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.534 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.535 19:05:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c4721534-f970-4c1d-b6a3-294ed69e4dc0 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=80f4e903-c246-4ece-9398-4695965c9722 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=200ec27a-0ec6-4387-a492-462cc78a2fac 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:40.535 19:05:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:43.068 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.068 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.068 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.068 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:43.069 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:43.069 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:43.069 Found net devices under 0000:84:00.0: cvl_0_0 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:43.069 Found net devices under 0000:84:00.1: cvl_0_1 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.069 19:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:13:43.069 00:13:43.069 --- 10.0.0.2 ping statistics --- 00:13:43.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.069 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:13:43.069 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:13:43.070 00:13:43.070 --- 10.0.0.1 ping statistics --- 00:13:43.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.070 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1625271 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1625271 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1625271 ']' 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.070 19:05:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:43.070 [2024-07-24 19:05:48.762852] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:13:43.070 [2024-07-24 19:05:48.762984] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.329 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.329 [2024-07-24 19:05:48.877220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.588 [2024-07-24 19:05:49.045051] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.588 [2024-07-24 19:05:49.045122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.588 [2024-07-24 19:05:49.045148] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.588 [2024-07-24 19:05:49.045170] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.588 [2024-07-24 19:05:49.045189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.588 [2024-07-24 19:05:49.045237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.588 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.588 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:43.588 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:43.588 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:43.588 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:43.588 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.588 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:44.155 [2024-07-24 19:05:49.593698] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.155 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:44.155 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:44.155 19:05:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:44.413 Malloc1 00:13:44.413 19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:44.671 Malloc2 00:13:44.930 19:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:45.189 19:05:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:45.768 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.048 [2024-07-24 19:05:51.629578] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.048 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:46.048 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 200ec27a-0ec6-4387-a492-462cc78a2fac -a 10.0.0.2 -s 4420 -i 4 00:13:46.306 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.306 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:46.306 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.306 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:46.306 19:05:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:48.209 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:48.209 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:48.209 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:48.209 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:48.209 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.209 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:48.209 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:48.209 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:48.467 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:48.467 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:48.467 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:48.467 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:48.467 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:48.467 [ 0]:0x1 00:13:48.467 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:13:48.467 19:05:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:48.467 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=352197764d6f4b2fbd1844054e5c366d 00:13:48.467 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 352197764d6f4b2fbd1844054e5c366d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:48.467 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:49.033 [ 0]:0x1 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=352197764d6f4b2fbd1844054e5c366d 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 352197764d6f4b2fbd1844054e5c366d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:49.033 [ 1]:0x2 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7d9cfba7dba1435bbd18e0d58b09faf0 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7d9cfba7dba1435bbd18e0d58b09faf0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:49.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.033 19:05:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.598 19:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:49.858 19:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:49.858 19:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 200ec27a-0ec6-4387-a492-462cc78a2fac -a 10.0.0.2 -s 4420 -i 4 00:13:50.116 19:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:50.116 19:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:50.116 19:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:50.116 19:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:50.116 19:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:50.116 19:05:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:52.019 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:52.019 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:52.019 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:52.019 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:52.019 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:52.019 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:52.019 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:52.019 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:52.277 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:52.277 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:52.277 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:52.277 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:52.277 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:52.277 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:52.278 [ 0]:0x2 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7d9cfba7dba1435bbd18e0d58b09faf0 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7d9cfba7dba1435bbd18e0d58b09faf0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:52.278 19:05:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:52.844 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:52.844 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:52.844 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:53.102 [ 0]:0x1 00:13:53.102 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:53.102 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:53.102 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=352197764d6f4b2fbd1844054e5c366d 00:13:53.102 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 352197764d6f4b2fbd1844054e5c366d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:53.102 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:53.102 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:53.102 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:53.102 [ 1]:0x2 00:13:53.102 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:13:53.102 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:53.102 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7d9cfba7dba1435bbd18e0d58b09faf0 00:13:53.102 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7d9cfba7dba1435bbd18e0d58b09faf0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:53.102 19:05:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:53.668 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:53.668 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:53.668 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:53.668 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:53.668 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.668 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:53.668 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:53.668 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:53.668 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:53.668 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:53.668 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:53.669 [ 0]:0x2 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7d9cfba7dba1435bbd18e0d58b09faf0 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7d9cfba7dba1435bbd18e0d58b09faf0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:53.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.669 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:54.235 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:54.235 19:05:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 200ec27a-0ec6-4387-a492-462cc78a2fac -a 10.0.0.2 -s 4420 -i 4 00:13:54.493 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:54.493 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:54.493 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:54.493 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:54.493 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:54.493 19:06:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:57.025 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:57.025 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:57.025 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:57.025 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:57.025 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.025 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:57.025 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:57.025 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:57.025 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:57.025 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:57.026 [ 0]:0x1 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=352197764d6f4b2fbd1844054e5c366d 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 352197764d6f4b2fbd1844054e5c366d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:57.026 [ 1]:0x2 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7d9cfba7dba1435bbd18e0d58b09faf0 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7d9cfba7dba1435bbd18e0d58b09faf0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:57.026 19:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:57.026 [ 0]:0x2 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7d9cfba7dba1435bbd18e0d58b09faf0 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7d9cfba7dba1435bbd18e0d58b09faf0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:57.026 19:06:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:57.593 [2024-07-24 19:06:03.054609] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:57.593 request: 00:13:57.593 { 00:13:57.593 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:57.593 "nsid": 2, 00:13:57.593 "host": "nqn.2016-06.io.spdk:host1", 00:13:57.593 "method": "nvmf_ns_remove_host", 00:13:57.593 "req_id": 1 00:13:57.593 } 00:13:57.593 Got JSON-RPC error response 00:13:57.593 response: 00:13:57.593 { 00:13:57.593 "code": -32602, 00:13:57.593 "message": "Invalid parameters" 00:13:57.593 } 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:57.593 [ 0]:0x2 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7d9cfba7dba1435bbd18e0d58b09faf0 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7d9cfba7dba1435bbd18e0d58b09faf0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:57.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1627144 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1627144 /var/tmp/host.sock 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1627144 ']' 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:57.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:57.593 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:57.851 [2024-07-24 19:06:03.297153] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
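The ns_is_visible checks traced above (ns_masking.sh@43-45) reduce to grepping the controller's active namespace list for the nsid and treating an all-zero NGUID from id-ns as "masked". A minimal sketch reconstructed from the xtrace output — the body is inferred from the traced commands, not copied from the script source:

ns_is_visible() {
    # the "[ i ]:0xN" lines above come from this grep over the active NS list
    nvme list-ns /dev/nvme0 | grep "$1"
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    # an all-zero NGUID means this host can no longer see the namespace
    [[ $nguid != "00000000000000000000000000000000" ]]
}

This is also why the NOT wrapper appears after each nvmf_ns_remove_host call: the helper is expected to fail (es=1) once the host loses visibility, and the test asserts exactly that.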
00:13:57.851 [2024-07-24 19:06:03.297266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1627144 ] 00:13:57.851 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.851 [2024-07-24 19:06:03.382600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.851 [2024-07-24 19:06:03.524520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.416 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:58.416 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:58.416 19:06:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.674 19:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:59.241 19:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c4721534-f970-4c1d-b6a3-294ed69e4dc0 00:13:59.241 19:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:59.241 19:06:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C4721534F9704C1DB6A3294ED69E4DC0 -i 00:13:59.499 19:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 80f4e903-c246-4ece-9398-4695965c9722 00:13:59.499 19:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:59.499 19:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 80F4E903C2464ECE93984695965C9722 -i 00:14:00.064 19:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:00.322 19:06:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:00.579 19:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:00.579 19:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:01.145 nvme0n1 00:14:01.145 19:06:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:01.145 19:06:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:01.401 nvme1n2 00:14:01.401 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:01.401 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:01.401 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:01.401 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:01.401 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:01.966 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:01.966 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:01.966 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:01.966 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:02.224 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c4721534-f970-4c1d-b6a3-294ed69e4dc0 == \c\4\7\2\1\5\3\4\-\f\9\7\0\-\4\c\1\d\-\b\6\a\3\-\2\9\4\e\d\6\9\e\4\d\c\0 ]] 00:14:02.224 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:02.224 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:02.224 19:06:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:02.814 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 80f4e903-c246-4ece-9398-4695965c9722 == \8\0\f\4\e\9\0\3\-\c\2\4\6\-\4\e\c\e\-\9\3\9\8\-\4\6\9\5\9\6\5\c\9\7\2\2 ]] 00:14:02.814 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1627144 00:14:02.814 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1627144 ']' 00:14:02.814 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1627144 00:14:02.814 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:02.814 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:02.814 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1627144 00:14:02.814 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:02.814 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:02.814 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 1627144' 00:14:02.814 killing process with pid 1627144 00:14:02.814 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1627144 00:14:02.814 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1627144 00:14:03.388 19:06:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:03.647 rmmod nvme_tcp 00:14:03.647 rmmod nvme_fabrics 00:14:03.647 rmmod nvme_keyring 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1625271 ']' 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1625271 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1625271 ']' 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1625271 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1625271 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1625271' 00:14:03.647 killing process with pid 1625271 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1625271 00:14:03.647 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1625271 00:14:04.212 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:04.212 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:04.212 
19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:04.212 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:04.212 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:04.212 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.212 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.212 19:06:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.113 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:06.113 00:14:06.113 real 0m26.071s 00:14:06.113 user 0m36.626s 00:14:06.113 sys 0m5.529s 00:14:06.113 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.113 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:06.113 ************************************ 00:14:06.113 END TEST nvmf_ns_masking 00:14:06.113 ************************************ 00:14:06.113 19:06:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:06.113 19:06:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:06.113 19:06:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:06.113 19:06:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:06.113 19:06:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.371 ************************************ 00:14:06.371 START TEST nvmf_nvme_cli 00:14:06.371 ************************************ 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:06.371 * Looking for test storage... 
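The NGUID arguments logged for nvmf_subsystem_add_ns -g in the masking phase that just finished (C4721534F9704C1DB6A3294ED69E4DC0, 80F4E903C2464ECE93984695965C9722) are the namespace UUIDs with the dashes dropped and the hex upper-cased, per the uuid2nguid / tr -d - trace at nvmf/common.sh@759. A sketch, assuming the helper performs nothing beyond that transformation:

uuid2nguid() {
    # c4721534-f970-4c1d-b6a3-294ed69e4dc0 -> C4721534F9704C1DB6A3294ED69E4DC0
    echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}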
00:14:06.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.371 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.372 19:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:06.372 19:06:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:08.902 19:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:08.902 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:09.161 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:09.161 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:09.161 19:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:09.161 Found net devices under 0000:84:00.0: cvl_0_0 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:09.161 Found net devices under 0000:84:00.1: cvl_0_1 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:09.161 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.162 19:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:09.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:14:09.162 00:14:09.162 --- 10.0.0.2 ping statistics --- 00:14:09.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.162 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:09.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:14:09.162 00:14:09.162 --- 10.0.0.1 ping statistics --- 00:14:09.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.162 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1630418 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1630418 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1630418 ']' 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:09.162 19:06:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.162 [2024-07-24 19:06:14.847907] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
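The nvmf_tcp_init sequence traced above pins the target-side NIC inside a network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) can coexist on one host. Condensed from the common.sh@248-264 trace — the same commands, minus the xtrace prefixes:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port

Everything target-side from here on — including the nvmf_tgt launch below — runs under ip netns exec cvl_0_0_ns_spdk, which is what the NVMF_TARGET_NS_CMD prefix captures.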
00:14:09.162 [2024-07-24 19:06:14.848013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.420 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.420 [2024-07-24 19:06:14.936424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:09.420 [2024-07-24 19:06:15.077317] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.420 [2024-07-24 19:06:15.077388] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.420 [2024-07-24 19:06:15.077408] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.420 [2024-07-24 19:06:15.077437] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.420 [2024-07-24 19:06:15.077453] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.420 [2024-07-24 19:06:15.077526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.420 [2024-07-24 19:06:15.077612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.420 [2024-07-24 19:06:15.077672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:09.420 [2024-07-24 19:06:15.077677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.679 [2024-07-24 19:06:15.268790] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.679 Malloc0 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:09.679 19:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.679 Malloc1 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.679 [2024-07-24 19:06:15.362843] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.679 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:14:09.937 00:14:09.937 Discovery Log Number of Records 2, Generation counter 2 00:14:09.937 =====Discovery Log Entry 0====== 00:14:09.937 trtype: tcp 00:14:09.937 adrfam: ipv4 00:14:09.937 subtype: current discovery subsystem 00:14:09.937 treq: not required 
00:14:09.937 portid: 0 00:14:09.937 trsvcid: 4420 00:14:09.938 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:09.938 traddr: 10.0.0.2 00:14:09.938 eflags: explicit discovery connections, duplicate discovery information 00:14:09.938 sectype: none 00:14:09.938 =====Discovery Log Entry 1====== 00:14:09.938 trtype: tcp 00:14:09.938 adrfam: ipv4 00:14:09.938 subtype: nvme subsystem 00:14:09.938 treq: not required 00:14:09.938 portid: 0 00:14:09.938 trsvcid: 4420 00:14:09.938 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:09.938 traddr: 10.0.0.2 00:14:09.938 eflags: none 00:14:09.938 sectype: none 00:14:09.938 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:09.938 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:09.938 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:09.938 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:09.938 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:09.938 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:09.938 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:09.938 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:09.938 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:09.938 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:09.938 19:06:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.504 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:10.504 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:10.504 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.504 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:10.504 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:10.504 19:06:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:13.032 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:13.032 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:13.032 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:13.032 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:13.032 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:13.032 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:13.033 /dev/nvme0n1 ]] 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:13.033 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.033 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:13.291 rmmod nvme_tcp 00:14:13.291 rmmod nvme_fabrics 00:14:13.291 rmmod nvme_keyring 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1630418 ']' 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1630418 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1630418 ']' 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1630418 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1630418 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1630418' 00:14:13.291 killing process with pid 1630418 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1630418 00:14:13.291 19:06:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1630418 00:14:13.858 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:13.858 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:13.858 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:13.858 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:13.858 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:13.859 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.859 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.859 19:06:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.763 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:15.763 00:14:15.763 real 0m9.527s 00:14:15.763 user 0m17.023s 00:14:15.763 sys 0m2.927s 00:14:15.763 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.763 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:15.763 ************************************ 00:14:15.763 END TEST nvmf_nvme_cli 00:14:15.763 ************************************ 00:14:15.763 19:06:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:15.763 19:06:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:15.763 19:06:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:15.763 19:06:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.763 19:06:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:15.763 ************************************ 00:14:15.763 START TEST nvmf_vfio_user 00:14:15.763 ************************************ 00:14:15.763 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:16.023 * Looking for test storage... 
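The nvme_cli helpers traced above are small bash loops: get_nvme_devs filters `nvme list` output down to /dev/nvme* nodes, waitforserial_disconnect polls lsblk until the test serial stops appearing, and killprocess probes the pid with kill -0 and ps before killing and reaping it. A minimal sketch of the first two, paraphrased from the traced nvmf/common.sh and autotest_common.sh logic (not the verbatim sources; the retry cap is a hypothetical simplification):

    get_nvme_devs() {
        local dev _
        while read -r dev _; do
            # skip the 'Node' header and '----' separator rows of 'nvme list'
            [[ $dev == /dev/nvme* ]] && echo "$dev"
        done < <(nvme list)
    }

    waitforserial_disconnect() {
        local serial=$1 i=0
        # poll until no block device reports the given serial any more
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && return 1   # hypothetical cap on retries
            sleep 1
        done
        return 0
    }

The trace above shows exactly this shape: each `read -r dev _` consumes one row of `nvme list`, and only rows matching /dev/nvme* are echoed back to the caller.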
00:14:16.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
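Each time paths/export.sh is sourced it prepends the Go, protoc and golangci directories again, which is why the PATH values in these export.sh@2-4 lines keep growing with duplicate components. That is harmless but noisy; a guard along these lines (hypothetical, not part of the actual export.sh) would keep PATH flat:

    prepend_path() {
        # prepend $1 only if it is not already a PATH component
        case ":$PATH:" in
            *":$1:"*) ;;
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin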
00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.023 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:16.024 19:06:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1631335 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1631335' 00:14:16.024 Process pid: 1631335 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1631335 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1631335 ']' 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.024 19:06:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:16.024 [2024-07-24 19:06:21.573835] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:14:16.024 [2024-07-24 19:06:21.573935] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.024 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.024 [2024-07-24 19:06:21.679793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:16.283 [2024-07-24 19:06:21.878683] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.283 [2024-07-24 19:06:21.878797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:16.283 [2024-07-24 19:06:21.878834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.283 [2024-07-24 19:06:21.878864] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.283 [2024-07-24 19:06:21.878891] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.283 [2024-07-24 19:06:21.879061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.283 [2024-07-24 19:06:21.879102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.283 [2024-07-24 19:06:21.879159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:16.283 [2024-07-24 19:06:21.879163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.216 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.216 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:17.216 19:06:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:18.151 19:06:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:18.409 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:18.409 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:18.409 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:18.409 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:18.409 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:18.976 Malloc1 00:14:18.976 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:19.234 19:06:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:19.492 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:20.058 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:20.058 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:20.058 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:20.331 Malloc2 00:14:20.331 19:06:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
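At this point setup_nvmf_vfio_user is wiring up the second device. Stripped of the xtrace noise, the whole setup amounts to one transport creation plus a short per-device RPC sequence; a sketch assembled from the commands in this trace ($rpc stands for the repo's scripts/rpc.py against the nvmf_tgt started above, paths shortened):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done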
00:14:20.910 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:21.168 19:06:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:21.425 19:06:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:21.425 19:06:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:21.426 19:06:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:21.426 19:06:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:21.426 19:06:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:21.426 19:06:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:21.426 [2024-07-24 19:06:27.082458] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:14:21.426 [2024-07-24 19:06:27.082557] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1632018 ] 00:14:21.426 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.685 [2024-07-24 19:06:27.130761] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:21.685 [2024-07-24 19:06:27.141977] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:21.685 [2024-07-24 19:06:27.142025] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2ad56a9000 00:14:21.685 [2024-07-24 19:06:27.142971] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:21.685 [2024-07-24 19:06:27.143965] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:21.685 [2024-07-24 19:06:27.144967] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:21.685 [2024-07-24 19:06:27.145977] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:21.685 [2024-07-24 19:06:27.146986] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:21.685 [2024-07-24 19:06:27.148006] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:21.686 [2024-07-24 19:06:27.148997] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:21.686 [2024-07-24 19:06:27.149996] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:21.686 [2024-07-24 19:06:27.151014] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:21.686 [2024-07-24 19:06:27.151042] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2ad569e000 00:14:21.686 [2024-07-24 19:06:27.152612] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:21.686 [2024-07-24 19:06:27.171888] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:21.686 [2024-07-24 19:06:27.171935] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:21.686 [2024-07-24 19:06:27.177200] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:21.686 [2024-07-24 19:06:27.177281] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:21.686 [2024-07-24 19:06:27.177414] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:21.686 [2024-07-24 19:06:27.177460] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:21.686 [2024-07-24 19:06:27.177476] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:21.686 [2024-07-24 19:06:27.178184] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:21.686 [2024-07-24 19:06:27.178214] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:21.686 [2024-07-24 19:06:27.178233] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:21.686 [2024-07-24 19:06:27.179192] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:21.686 [2024-07-24 19:06:27.179215] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:21.686 [2024-07-24 19:06:27.179234] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:21.686 [2024-07-24 19:06:27.180197] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:21.686 [2024-07-24 19:06:27.180225] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:21.686 [2024-07-24 19:06:27.181204] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:21.686 [2024-07-24 19:06:27.181229] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:21.686 [2024-07-24 19:06:27.181241] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:21.686 [2024-07-24 19:06:27.181257] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:21.686 [2024-07-24 19:06:27.181369] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:21.686 [2024-07-24 19:06:27.181381] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:21.686 [2024-07-24 19:06:27.181393] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:21.686 [2024-07-24 19:06:27.182215] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:21.686 [2024-07-24 19:06:27.183218] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:21.686 [2024-07-24 19:06:27.184230] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:21.686 [2024-07-24 19:06:27.185223] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:21.686 [2024-07-24 19:06:27.185378] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:21.686 [2024-07-24 19:06:27.186246] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:21.686 [2024-07-24 19:06:27.186270] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:21.686 [2024-07-24 19:06:27.186282] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:21.686 [2024-07-24 19:06:27.186315] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:21.686 [2024-07-24 19:06:27.186334] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:21.686 [2024-07-24 19:06:27.186366] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:21.686 [2024-07-24 19:06:27.186379] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:21.686 [2024-07-24 19:06:27.186388] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.686 [2024-07-24 19:06:27.186411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:21.686 [2024-07-24 19:06:27.186507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:21.686 [2024-07-24 19:06:27.186530] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:21.686 [2024-07-24 19:06:27.186546] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:21.686 [2024-07-24 19:06:27.186557] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:21.686 [2024-07-24 19:06:27.186568] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:21.686 [2024-07-24 19:06:27.186579] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:21.686 [2024-07-24 19:06:27.186589] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:21.686 [2024-07-24 19:06:27.186600] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:21.686 [2024-07-24 19:06:27.186617] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:21.686 [2024-07-24 19:06:27.186643] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:21.686 [2024-07-24 19:06:27.186671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:21.686 [2024-07-24 19:06:27.186698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.686 [2024-07-24 19:06:27.186717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.686 [2024-07-24 19:06:27.186733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.686 [2024-07-24 19:06:27.186749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.686 [2024-07-24 19:06:27.186761] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:21.686 [2024-07-24 19:06:27.186784] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:21.686 [2024-07-24 19:06:27.186804] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:21.686 [2024-07-24 19:06:27.186821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:21.686 [2024-07-24 19:06:27.186835] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:21.686 
[2024-07-24 19:06:27.186846] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:21.686 [2024-07-24 19:06:27.186865] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:21.686 [2024-07-24 19:06:27.186880] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:21.686 [2024-07-24 19:06:27.186898] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:21.686 [2024-07-24 19:06:27.186914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:21.686 [2024-07-24 19:06:27.187005] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:21.686 [2024-07-24 19:06:27.187026] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:21.686 [2024-07-24 19:06:27.187049] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:21.686 [2024-07-24 19:06:27.187061] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:21.686 [2024-07-24 19:06:27.187070] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.686 [2024-07-24 19:06:27.187082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:21.686 [2024-07-24 19:06:27.187105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:21.686 [2024-07-24 19:06:27.187126] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:21.686 [2024-07-24 19:06:27.187153] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:21.686 [2024-07-24 19:06:27.187172] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:21.686 [2024-07-24 19:06:27.187189] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:21.686 [2024-07-24 19:06:27.187200] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:21.686 [2024-07-24 19:06:27.187208] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.687 [2024-07-24 19:06:27.187221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:21.687 [2024-07-24 19:06:27.187256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:21.687 [2024-07-24 19:06:27.187284] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:14:21.687 [2024-07-24 19:06:27.187304] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:21.687 [2024-07-24 19:06:27.187321] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:21.687 [2024-07-24 19:06:27.187332] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:21.687 [2024-07-24 19:06:27.187341] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.687 [2024-07-24 19:06:27.187354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:21.687 [2024-07-24 19:06:27.187373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:21.687 [2024-07-24 19:06:27.187393] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:21.687 [2024-07-24 19:06:27.187408] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:21.687 [2024-07-24 19:06:27.187426] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:21.687 [2024-07-24 19:06:27.187457] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:21.687 [2024-07-24 19:06:27.187469] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:21.687 [2024-07-24 19:06:27.187481] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:21.687 [2024-07-24 19:06:27.187496] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:21.687 [2024-07-24 19:06:27.187508] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:21.687 [2024-07-24 19:06:27.187519] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:21.687 [2024-07-24 19:06:27.187552] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:21.687 [2024-07-24 19:06:27.187577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:21.687 [2024-07-24 19:06:27.187603] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:21.687 [2024-07-24 19:06:27.187620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:21.687 [2024-07-24 19:06:27.187642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:21.687 [2024-07-24 
19:06:27.187662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:21.687 [2024-07-24 19:06:27.187684] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:21.687 [2024-07-24 19:06:27.187700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:21.687 [2024-07-24 19:06:27.187730] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:21.687 [2024-07-24 19:06:27.187744] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:21.687 [2024-07-24 19:06:27.187752] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:21.687 [2024-07-24 19:06:27.187761] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:21.687 [2024-07-24 19:06:27.187769] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:21.687 [2024-07-24 19:06:27.187782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:21.687 [2024-07-24 19:06:27.187798] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:21.687 [2024-07-24 19:06:27.187809] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:21.687 [2024-07-24 19:06:27.187817] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.687 [2024-07-24 19:06:27.187829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:21.687 [2024-07-24 19:06:27.187844] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:21.687 [2024-07-24 19:06:27.187855] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:21.687 [2024-07-24 19:06:27.187863] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.687 [2024-07-24 19:06:27.187875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:21.687 [2024-07-24 19:06:27.187891] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:21.687 [2024-07-24 19:06:27.187902] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:21.687 [2024-07-24 19:06:27.187910] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:21.687 [2024-07-24 19:06:27.187922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:21.687 [2024-07-24 19:06:27.187943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:21.687 [2024-07-24 19:06:27.187971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:21.687 [2024-07-24 
19:06:27.187998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:21.687 [2024-07-24 19:06:27.188015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:21.687 ===================================================== 00:14:21.687 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:21.687 ===================================================== 00:14:21.687 Controller Capabilities/Features 00:14:21.687 ================================ 00:14:21.687 Vendor ID: 4e58 00:14:21.687 Subsystem Vendor ID: 4e58 00:14:21.687 Serial Number: SPDK1 00:14:21.687 Model Number: SPDK bdev Controller 00:14:21.687 Firmware Version: 24.09 00:14:21.687 Recommended Arb Burst: 6 00:14:21.687 IEEE OUI Identifier: 8d 6b 50 00:14:21.687 Multi-path I/O 00:14:21.687 May have multiple subsystem ports: Yes 00:14:21.687 May have multiple controllers: Yes 00:14:21.687 Associated with SR-IOV VF: No 00:14:21.687 Max Data Transfer Size: 131072 00:14:21.687 Max Number of Namespaces: 32 00:14:21.687 Max Number of I/O Queues: 127 00:14:21.687 NVMe Specification Version (VS): 1.3 00:14:21.687 NVMe Specification Version (Identify): 1.3 00:14:21.687 Maximum Queue Entries: 256 00:14:21.687 Contiguous Queues Required: Yes 00:14:21.687 Arbitration Mechanisms Supported 00:14:21.687 Weighted Round Robin: Not Supported 00:14:21.687 Vendor Specific: Not Supported 00:14:21.687 Reset Timeout: 15000 ms 00:14:21.687 Doorbell Stride: 4 bytes 00:14:21.687 NVM Subsystem Reset: Not Supported 00:14:21.687 Command Sets Supported 00:14:21.687 NVM Command Set: Supported 00:14:21.687 Boot Partition: Not Supported 00:14:21.687 Memory Page Size Minimum: 4096 bytes 00:14:21.687 Memory Page Size Maximum: 4096 bytes 00:14:21.687 Persistent Memory Region: Not Supported 00:14:21.687 Optional Asynchronous Events Supported 00:14:21.687 Namespace Attribute Notices: Supported 00:14:21.687 Firmware Activation Notices: Not Supported 00:14:21.687 ANA Change Notices: Not Supported 00:14:21.687 PLE Aggregate Log Change Notices: Not Supported 00:14:21.687 LBA Status Info Alert Notices: Not Supported 00:14:21.687 EGE Aggregate Log Change Notices: Not Supported 00:14:21.687 Normal NVM Subsystem Shutdown event: Not Supported 00:14:21.687 Zone Descriptor Change Notices: Not Supported 00:14:21.687 Discovery Log Change Notices: Not Supported 00:14:21.687 Controller Attributes 00:14:21.687 128-bit Host Identifier: Supported 00:14:21.687 Non-Operational Permissive Mode: Not Supported 00:14:21.687 NVM Sets: Not Supported 00:14:21.687 Read Recovery Levels: Not Supported 00:14:21.687 Endurance Groups: Not Supported 00:14:21.687 Predictable Latency Mode: Not Supported 00:14:21.687 Traffic Based Keep ALive: Not Supported 00:14:21.687 Namespace Granularity: Not Supported 00:14:21.687 SQ Associations: Not Supported 00:14:21.687 UUID List: Not Supported 00:14:21.687 Multi-Domain Subsystem: Not Supported 00:14:21.687 Fixed Capacity Management: Not Supported 00:14:21.687 Variable Capacity Management: Not Supported 00:14:21.687 Delete Endurance Group: Not Supported 00:14:21.687 Delete NVM Set: Not Supported 00:14:21.687 Extended LBA Formats Supported: Not Supported 00:14:21.687 Flexible Data Placement Supported: Not Supported 00:14:21.687 00:14:21.687 Controller Memory Buffer Support 00:14:21.687 ================================ 00:14:21.687 Supported: No 00:14:21.687 00:14:21.687 Persistent 
Memory Region Support 00:14:21.687 ================================ 00:14:21.687 Supported: No 00:14:21.687 00:14:21.687 Admin Command Set Attributes 00:14:21.687 ============================ 00:14:21.687 Security Send/Receive: Not Supported 00:14:21.687 Format NVM: Not Supported 00:14:21.687 Firmware Activate/Download: Not Supported 00:14:21.687 Namespace Management: Not Supported 00:14:21.687 Device Self-Test: Not Supported 00:14:21.687 Directives: Not Supported 00:14:21.687 NVMe-MI: Not Supported 00:14:21.688 Virtualization Management: Not Supported 00:14:21.688 Doorbell Buffer Config: Not Supported 00:14:21.688 Get LBA Status Capability: Not Supported 00:14:21.688 Command & Feature Lockdown Capability: Not Supported 00:14:21.688 Abort Command Limit: 4 00:14:21.688 Async Event Request Limit: 4 00:14:21.688 Number of Firmware Slots: N/A 00:14:21.688 Firmware Slot 1 Read-Only: N/A 00:14:21.688 Firmware Activation Without Reset: N/A 00:14:21.688 Multiple Update Detection Support: N/A 00:14:21.688 Firmware Update Granularity: No Information Provided 00:14:21.688 Per-Namespace SMART Log: No 00:14:21.688 Asymmetric Namespace Access Log Page: Not Supported 00:14:21.688 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:21.688 Command Effects Log Page: Supported 00:14:21.688 Get Log Page Extended Data: Supported 00:14:21.688 Telemetry Log Pages: Not Supported 00:14:21.688 Persistent Event Log Pages: Not Supported 00:14:21.688 Supported Log Pages Log Page: May Support 00:14:21.688 Commands Supported & Effects Log Page: Not Supported 00:14:21.688 Feature Identifiers & Effects Log Page:May Support 00:14:21.688 NVMe-MI Commands & Effects Log Page: May Support 00:14:21.688 Data Area 4 for Telemetry Log: Not Supported 00:14:21.688 Error Log Page Entries Supported: 128 00:14:21.688 Keep Alive: Supported 00:14:21.688 Keep Alive Granularity: 10000 ms 00:14:21.688 00:14:21.688 NVM Command Set Attributes 00:14:21.688 ========================== 00:14:21.688 Submission Queue Entry Size 00:14:21.688 Max: 64 00:14:21.688 Min: 64 00:14:21.688 Completion Queue Entry Size 00:14:21.688 Max: 16 00:14:21.688 Min: 16 00:14:21.688 Number of Namespaces: 32 00:14:21.688 Compare Command: Supported 00:14:21.688 Write Uncorrectable Command: Not Supported 00:14:21.688 Dataset Management Command: Supported 00:14:21.688 Write Zeroes Command: Supported 00:14:21.688 Set Features Save Field: Not Supported 00:14:21.688 Reservations: Not Supported 00:14:21.688 Timestamp: Not Supported 00:14:21.688 Copy: Supported 00:14:21.688 Volatile Write Cache: Present 00:14:21.688 Atomic Write Unit (Normal): 1 00:14:21.688 Atomic Write Unit (PFail): 1 00:14:21.688 Atomic Compare & Write Unit: 1 00:14:21.688 Fused Compare & Write: Supported 00:14:21.688 Scatter-Gather List 00:14:21.688 SGL Command Set: Supported (Dword aligned) 00:14:21.688 SGL Keyed: Not Supported 00:14:21.688 SGL Bit Bucket Descriptor: Not Supported 00:14:21.688 SGL Metadata Pointer: Not Supported 00:14:21.688 Oversized SGL: Not Supported 00:14:21.688 SGL Metadata Address: Not Supported 00:14:21.688 SGL Offset: Not Supported 00:14:21.688 Transport SGL Data Block: Not Supported 00:14:21.688 Replay Protected Memory Block: Not Supported 00:14:21.688 00:14:21.688 Firmware Slot Information 00:14:21.688 ========================= 00:14:21.688 Active slot: 1 00:14:21.688 Slot 1 Firmware Revision: 24.09 00:14:21.688 00:14:21.688 00:14:21.688 Commands Supported and Effects 00:14:21.688 ============================== 00:14:21.688 Admin Commands 00:14:21.688 -------------- 00:14:21.688 Get 
Log Page (02h): Supported 00:14:21.688 Identify (06h): Supported 00:14:21.688 Abort (08h): Supported 00:14:21.688 Set Features (09h): Supported 00:14:21.688 Get Features (0Ah): Supported 00:14:21.688 Asynchronous Event Request (0Ch): Supported 00:14:21.688 Keep Alive (18h): Supported 00:14:21.688 I/O Commands 00:14:21.688 ------------ 00:14:21.688 Flush (00h): Supported LBA-Change 00:14:21.688 Write (01h): Supported LBA-Change 00:14:21.688 Read (02h): Supported 00:14:21.688 Compare (05h): Supported 00:14:21.688 Write Zeroes (08h): Supported LBA-Change 00:14:21.688 Dataset Management (09h): Supported LBA-Change 00:14:21.688 Copy (19h): Supported LBA-Change 00:14:21.688 00:14:21.688 Error Log 00:14:21.688 ========= 00:14:21.688 00:14:21.688 Arbitration 00:14:21.688 =========== 00:14:21.688 Arbitration Burst: 1 00:14:21.688 00:14:21.688 Power Management 00:14:21.688 ================ 00:14:21.688 Number of Power States: 1 00:14:21.688 Current Power State: Power State #0 00:14:21.688 Power State #0: 00:14:21.688 Max Power: 0.00 W 00:14:21.688 Non-Operational State: Operational 00:14:21.688 Entry Latency: Not Reported 00:14:21.688 Exit Latency: Not Reported 00:14:21.688 Relative Read Throughput: 0 00:14:21.688 Relative Read Latency: 0 00:14:21.688 Relative Write Throughput: 0 00:14:21.688 Relative Write Latency: 0 00:14:21.688 Idle Power: Not Reported 00:14:21.688 Active Power: Not Reported 00:14:21.688 Non-Operational Permissive Mode: Not Supported 00:14:21.688 00:14:21.688 Health Information 00:14:21.688 ================== 00:14:21.688 Critical Warnings: 00:14:21.688 Available Spare Space: OK 00:14:21.688 Temperature: OK 00:14:21.688 Device Reliability: OK 00:14:21.688 Read Only: No 00:14:21.688 Volatile Memory Backup: OK 00:14:21.688 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:21.688 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:21.688 Available Spare: 0% 00:14:21.688 Available Spare Threshold: 0% 00:14:21.688 Life Percentage Used: 0% 00:14:21.688 Data Units Read: 0 00:14:21.688 Data Units Written: 0 00:14:21.688 Host Read Commands: 0 00:14:21.688 Host Write Commands: 0 00:14:21.688 Controller Busy Time: 0 minutes 00:14:21.688 Power Cycles: 0 00:14:21.688 Power On Hours: 0 hours 00:14:21.688 Unsafe Shutdowns: 0 00:14:21.688 Unrecoverable Media Errors: 0 00:14:21.688 Lifetime Error Log Entries: 0 00:14:21.688 Warning Temperature Time: 0 minutes 00:14:21.688 Critical Temperature Time: 0 minutes 00:14:21.688 00:14:21.688 Number of Queues 00:14:21.688 ================ 00:14:21.688 Number of I/O Submission Queues: 127 00:14:21.688 Number of I/O Completion Queues: 127 00:14:21.688 00:14:21.688 Active Namespaces 00:14:21.688 ================= 00:14:21.688 Namespace ID:1 00:14:21.688 Error Recovery Timeout: Unlimited 00:14:21.688 Command Set Identifier: NVM (00h) 00:14:21.688 Deallocate: Supported 00:14:21.688 Deallocated/Unwritten Error: Not Supported 00:14:21.688 Deallocated Read Value: Unknown 00:14:21.688 Deallocate in Write Zeroes: Not Supported 00:14:21.688 Deallocated Guard Field: 0xFFFF 00:14:21.688 Flush: Supported 00:14:21.688 Reservation: Supported 00:14:21.688 Namespace Sharing Capabilities: Multiple Controllers 00:14:21.688 Size (in LBAs): 131072 (0GiB) 00:14:21.688 Capacity (in LBAs): 131072 (0GiB) 00:14:21.688 Utilization (in LBAs): 131072 (0GiB) 00:14:21.688 NGUID: 46E893650AEA40C3AC4EF3C80FE05D7E 00:14:21.688 UUID: 46e89365-0aea-40c3-ac4e-f3c80fe05d7e 00:14:21.688 Thin Provisioning: Not Supported 00:14:21.688 Per-NS Atomic Units: Yes 00:14:21.688 Atomic Boundary Size (Normal): 0 00:14:21.688 Atomic Boundary Size (PFail): 0 00:14:21.688 Atomic Boundary Offset: 0 00:14:21.689 Maximum Single Source Range Length: 65535 00:14:21.689 Maximum Copy Length: 65535 00:14:21.689 Maximum Source Range Count: 1 00:14:21.689 NGUID/EUI64 Never Reused: No 00:14:21.689 Namespace Write Protected: No 00:14:21.689 Number of LBA Formats: 1 00:14:21.689 Current LBA Format: LBA Format #00 00:14:21.689 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:21.689 00:14:21.689 [2024-07-24 19:06:27.188181] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:21.688 [2024-07-24 19:06:27.188204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:21.688 [2024-07-24 19:06:27.188260] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:21.688 [2024-07-24 19:06:27.188283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.688 [2024-07-24 19:06:27.188298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.688 [2024-07-24 19:06:27.188312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.688 [2024-07-24 19:06:27.188325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.688 [2024-07-24 19:06:27.192445] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:21.688 [2024-07-24 19:06:27.192475] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:21.688 [2024-07-24 19:06:27.193288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:21.688 [2024-07-24 19:06:27.193390] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:21.688 [2024-07-24 19:06:27.193408] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:21.688 [2024-07-24 19:06:27.194299] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:21.688 [2024-07-24 19:06:27.194329] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:21.688 [2024-07-24 19:06:27.194405] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:21.688 [2024-07-24 19:06:27.196359] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:21.689 19:06:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:21.689 EAL: No free 2048 kB hugepages reported 
on node 1 00:14:21.947 [2024-07-24 19:06:27.476480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:27.221 Initializing NVMe Controllers 00:14:27.221 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:27.221 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:27.221 Initialization complete. Launching workers. 00:14:27.221 ======================================================== 00:14:27.221 Latency(us) 00:14:27.221 Device Information : IOPS MiB/s Average min max 00:14:27.221 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24089.40 94.10 5317.97 1671.56 9618.64 00:14:27.221 ======================================================== 00:14:27.221 Total : 24089.40 94.10 5317.97 1671.56 9618.64 00:14:27.221 00:14:27.221 [2024-07-24 19:06:32.503505] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:27.221 19:06:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:27.221 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.221 [2024-07-24 19:06:32.785122] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:32.486 Initializing NVMe Controllers 00:14:32.486 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:32.486 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:32.486 Initialization complete. Launching workers. 
00:14:32.486 ======================================================== 00:14:32.486 Latency(us) 00:14:32.486 Device Information : IOPS MiB/s Average min max 00:14:32.486 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16046.31 62.68 7981.73 6981.62 15110.73 00:14:32.486 ======================================================== 00:14:32.486 Total : 16046.31 62.68 7981.73 6981.62 15110.73 00:14:32.486 00:14:32.486 [2024-07-24 19:06:37.826510] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:32.486 19:06:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:32.486 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.486 [2024-07-24 19:06:38.095933] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:37.752 [2024-07-24 19:06:43.188917] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:37.752 Initializing NVMe Controllers 00:14:37.752 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:37.752 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:37.752 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:37.752 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:37.752 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:37.752 Initialization complete. Launching workers. 00:14:37.752 Starting thread on core 2 00:14:37.752 Starting thread on core 3 00:14:37.752 Starting thread on core 1 00:14:37.752 19:06:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:37.752 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.009 [2024-07-24 19:06:43.563635] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:41.288 [2024-07-24 19:06:46.623436] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:41.288 Initializing NVMe Controllers 00:14:41.288 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:41.288 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:41.288 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:41.288 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:41.288 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:41.288 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:41.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:41.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:41.288 Initialization complete. Launching workers. 
00:14:41.288 Starting thread on core 1 with urgent priority queue 00:14:41.288 Starting thread on core 2 with urgent priority queue 00:14:41.288 Starting thread on core 3 with urgent priority queue 00:14:41.288 Starting thread on core 0 with urgent priority queue 00:14:41.288 SPDK bdev Controller (SPDK1 ) core 0: 4180.00 IO/s 23.92 secs/100000 ios 00:14:41.288 SPDK bdev Controller (SPDK1 ) core 1: 4003.00 IO/s 24.98 secs/100000 ios 00:14:41.288 SPDK bdev Controller (SPDK1 ) core 2: 4338.67 IO/s 23.05 secs/100000 ios 00:14:41.288 SPDK bdev Controller (SPDK1 ) core 3: 4429.00 IO/s 22.58 secs/100000 ios 00:14:41.288 ======================================================== 00:14:41.288 00:14:41.288 19:06:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:41.288 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.545 [2024-07-24 19:06:47.064085] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:41.545 Initializing NVMe Controllers 00:14:41.545 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:41.545 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:41.545 Namespace ID: 1 size: 0GB 00:14:41.545 Initialization complete. 00:14:41.545 INFO: using host memory buffer for IO 00:14:41.545 Hello world! 00:14:41.545 [2024-07-24 19:06:47.098196] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:41.545 19:06:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:41.802 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.059 [2024-07-24 19:06:47.521999] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:42.993 Initializing NVMe Controllers 00:14:42.993 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:42.993 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:42.993 Initialization complete. Launching workers. 
00:14:42.993 submit (in ns) avg, min, max = 9250.8, 4845.9, 4007517.0 00:14:42.993 complete (in ns) avg, min, max = 38684.1, 2869.6, 4012238.9 00:14:42.993 00:14:42.993 Submit histogram 00:14:42.993 ================ 00:14:42.993 Range in us Cumulative Count 00:14:42.993 4.836 - 4.859: 0.0629% ( 6) 00:14:42.993 4.859 - 4.883: 0.1677% ( 10) 00:14:42.993 4.883 - 4.907: 0.3563% ( 18) 00:14:42.993 4.907 - 4.930: 0.6707% ( 30) 00:14:42.993 4.930 - 4.954: 0.9851% ( 30) 00:14:42.993 4.954 - 4.978: 1.4777% ( 47) 00:14:42.993 4.978 - 5.001: 1.8026% ( 31) 00:14:42.993 5.001 - 5.025: 2.2846% ( 46) 00:14:42.993 5.025 - 5.049: 2.8401% ( 53) 00:14:42.993 5.049 - 5.073: 3.8881% ( 100) 00:14:42.993 5.073 - 5.096: 5.6382% ( 167) 00:14:42.993 5.096 - 5.120: 8.6041% ( 283) 00:14:42.993 5.120 - 5.144: 13.0476% ( 424) 00:14:42.993 5.144 - 5.167: 18.5705% ( 527) 00:14:42.993 5.167 - 5.191: 25.6864% ( 679) 00:14:42.993 5.191 - 5.215: 32.2469% ( 626) 00:14:42.993 5.215 - 5.239: 38.3253% ( 580) 00:14:42.993 5.239 - 5.262: 43.5758% ( 501) 00:14:42.993 5.262 - 5.286: 47.0656% ( 333) 00:14:42.993 5.286 - 5.310: 50.2934% ( 308) 00:14:42.993 5.310 - 5.333: 53.3326% ( 290) 00:14:42.993 5.333 - 5.357: 56.8015% ( 331) 00:14:42.993 5.357 - 5.381: 60.9411% ( 395) 00:14:42.993 5.381 - 5.404: 64.4624% ( 336) 00:14:42.993 5.404 - 5.428: 67.4701% ( 287) 00:14:42.993 5.428 - 5.452: 70.0377% ( 245) 00:14:42.993 5.452 - 5.476: 71.7669% ( 165) 00:14:42.993 5.476 - 5.499: 73.3913% ( 155) 00:14:42.993 5.499 - 5.523: 74.6594% ( 121) 00:14:42.993 5.523 - 5.547: 75.9484% ( 123) 00:14:42.993 5.547 - 5.570: 76.7240% ( 74) 00:14:42.993 5.570 - 5.594: 77.3842% ( 63) 00:14:42.993 5.594 - 5.618: 77.5519% ( 16) 00:14:42.993 5.618 - 5.641: 77.6567% ( 10) 00:14:42.993 5.641 - 5.665: 77.8139% ( 15) 00:14:42.993 5.665 - 5.689: 78.7990% ( 94) 00:14:42.993 5.689 - 5.713: 82.5927% ( 362) 00:14:42.993 5.713 - 5.736: 88.4091% ( 555) 00:14:42.993 5.736 - 5.760: 91.9409% ( 337) 00:14:42.993 5.760 - 5.784: 95.2211% ( 313) 00:14:42.993 5.784 - 5.807: 95.6613% ( 42) 00:14:42.993 5.807 - 5.831: 95.8814% ( 21) 00:14:42.993 5.831 - 5.855: 96.1538% ( 26) 00:14:42.993 5.855 - 5.879: 96.2586% ( 10) 00:14:42.993 5.879 - 5.902: 96.3844% ( 12) 00:14:42.993 5.902 - 5.926: 96.4892% ( 10) 00:14:42.993 5.926 - 5.950: 96.5416% ( 5) 00:14:42.993 5.950 - 5.973: 96.6674% ( 12) 00:14:42.993 5.973 - 5.997: 96.9503% ( 27) 00:14:42.993 5.997 - 6.021: 97.1704% ( 21) 00:14:42.993 6.021 - 6.044: 97.3905% ( 21) 00:14:42.993 6.044 - 6.068: 97.4219% ( 3) 00:14:42.993 6.068 - 6.116: 97.5267% ( 10) 00:14:42.993 6.116 - 6.163: 97.5477% ( 2) 00:14:42.993 6.163 - 6.210: 97.5582% ( 1) 00:14:42.993 6.210 - 6.258: 97.6001% ( 4) 00:14:42.993 6.258 - 6.305: 97.6210% ( 2) 00:14:42.993 6.353 - 6.400: 97.6315% ( 1) 00:14:42.993 6.447 - 6.495: 97.6420% ( 1) 00:14:42.993 6.495 - 6.542: 97.6630% ( 2) 00:14:42.993 6.542 - 6.590: 97.7049% ( 4) 00:14:42.993 6.590 - 6.637: 97.7678% ( 6) 00:14:42.993 6.637 - 6.684: 97.8516% ( 8) 00:14:42.993 6.684 - 6.732: 97.9459% ( 9) 00:14:42.993 6.732 - 6.779: 98.0193% ( 7) 00:14:42.993 6.779 - 6.827: 98.1136% ( 9) 00:14:42.993 6.827 - 6.874: 98.2079% ( 9) 00:14:42.993 6.874 - 6.921: 98.2813% ( 7) 00:14:42.993 6.921 - 6.969: 98.3546% ( 7) 00:14:42.993 6.969 - 7.016: 98.4175% ( 6) 00:14:42.993 7.016 - 7.064: 98.4594% ( 4) 00:14:42.993 7.064 - 7.111: 98.5118% ( 5) 00:14:42.993 7.111 - 7.159: 98.5538% ( 4) 00:14:42.993 7.159 - 7.206: 98.5747% ( 2) 00:14:42.993 7.206 - 7.253: 98.5852% ( 1) 00:14:42.993 7.253 - 7.301: 98.6586% ( 7) 00:14:42.993 7.301 - 7.348: 98.6690% ( 1) 
00:14:42.993 7.348 - 7.396: 98.7005% ( 3) 00:14:42.993 7.396 - 7.443: 98.7110% ( 1) 00:14:42.993 7.490 - 7.538: 98.7319% ( 2) 00:14:42.993 7.538 - 7.585: 98.7424% ( 1) 00:14:42.993 7.585 - 7.633: 98.7634% ( 2) 00:14:42.993 7.633 - 7.680: 98.7948% ( 3) 00:14:42.993 7.727 - 7.775: 98.8053% ( 1) 00:14:42.993 7.775 - 7.822: 98.8262% ( 2) 00:14:42.993 7.822 - 7.870: 98.8367% ( 1) 00:14:42.993 7.870 - 7.917: 98.8472% ( 1) 00:14:42.993 7.964 - 8.012: 98.8682% ( 2) 00:14:42.993 8.059 - 8.107: 98.8786% ( 1) 00:14:42.993 8.344 - 8.391: 98.8891% ( 1) 00:14:42.993 8.533 - 8.581: 98.8996% ( 1) 00:14:42.993 8.818 - 8.865: 98.9101% ( 1) 00:14:42.993 9.244 - 9.292: 98.9310% ( 2) 00:14:42.993 9.387 - 9.434: 98.9415% ( 1) 00:14:42.993 9.434 - 9.481: 98.9520% ( 1) 00:14:42.993 9.624 - 9.671: 98.9625% ( 1) 00:14:42.993 9.719 - 9.766: 98.9730% ( 1) 00:14:42.993 9.766 - 9.813: 99.0044% ( 3) 00:14:42.993 10.050 - 10.098: 99.0254% ( 2) 00:14:42.993 10.098 - 10.145: 99.0358% ( 1) 00:14:42.993 10.145 - 10.193: 99.0463% ( 1) 00:14:42.993 10.240 - 10.287: 99.0568% ( 1) 00:14:42.993 10.382 - 10.430: 99.0673% ( 1) 00:14:42.993 10.430 - 10.477: 99.0778% ( 1) 00:14:42.994 10.477 - 10.524: 99.0882% ( 1) 00:14:42.994 10.572 - 10.619: 99.0987% ( 1) 00:14:42.994 10.619 - 10.667: 99.1197% ( 2) 00:14:42.994 10.714 - 10.761: 99.1302% ( 1) 00:14:42.994 10.761 - 10.809: 99.1406% ( 1) 00:14:42.994 10.856 - 10.904: 99.1511% ( 1) 00:14:42.994 10.904 - 10.951: 99.1721% ( 2) 00:14:42.994 10.999 - 11.046: 99.1826% ( 1) 00:14:42.994 11.046 - 11.093: 99.1930% ( 1) 00:14:42.994 11.093 - 11.141: 99.2035% ( 1) 00:14:42.994 11.141 - 11.188: 99.2140% ( 1) 00:14:42.994 11.236 - 11.283: 99.2350% ( 2) 00:14:42.994 11.283 - 11.330: 99.2559% ( 2) 00:14:42.994 11.330 - 11.378: 99.2769% ( 2) 00:14:42.994 11.378 - 11.425: 99.2874% ( 1) 00:14:42.994 11.473 - 11.520: 99.2978% ( 1) 00:14:42.994 11.520 - 11.567: 99.3083% ( 1) 00:14:42.994 11.567 - 11.615: 99.3188% ( 1) 00:14:42.994 11.710 - 11.757: 99.3293% ( 1) 00:14:42.994 11.804 - 11.852: 99.3607% ( 3) 00:14:42.994 11.852 - 11.899: 99.3712% ( 1) 00:14:42.994 11.899 - 11.947: 99.3817% ( 1) 00:14:42.994 11.947 - 11.994: 99.3922% ( 1) 00:14:42.994 11.994 - 12.041: 99.4026% ( 1) 00:14:42.994 12.231 - 12.326: 99.4131% ( 1) 00:14:42.994 12.326 - 12.421: 99.4236% ( 1) 00:14:42.994 12.516 - 12.610: 99.4341% ( 1) 00:14:42.994 12.895 - 12.990: 99.4446% ( 1) 00:14:42.994 12.990 - 13.084: 99.4550% ( 1) 00:14:42.994 13.369 - 13.464: 99.4655% ( 1) 00:14:42.994 13.843 - 13.938: 99.4760% ( 1) 00:14:42.994 14.317 - 14.412: 99.4865% ( 1) 00:14:42.994 14.507 - 14.601: 99.4970% ( 1) 00:14:42.994 14.601 - 14.696: 99.5074% ( 1) 00:14:42.994 14.696 - 14.791: 99.5179% ( 1) 00:14:42.994 15.076 - 15.170: 99.5284% ( 1) 00:14:42.994 15.550 - 15.644: 99.5494% ( 2) 00:14:42.994 15.644 - 15.739: 99.5598% ( 1) 00:14:42.994 19.058 - 19.153: 99.5703% ( 1) 00:14:42.994 19.153 - 19.247: 99.5808% ( 1) 00:14:42.994 19.342 - 19.437: 99.5913% ( 1) 00:14:42.994 19.532 - 19.627: 99.6018% ( 1) 00:14:42.994 19.627 - 19.721: 99.6227% ( 2) 00:14:42.994 19.721 - 19.816: 99.6646% ( 4) 00:14:42.994 19.816 - 19.911: 99.6751% ( 1) 00:14:42.994 19.911 - 20.006: 99.6856% ( 1) 00:14:42.994 20.006 - 20.101: 99.6961% ( 1) 00:14:42.994 20.101 - 20.196: 99.7066% ( 1) 00:14:42.994 20.196 - 20.290: 99.7170% ( 1) 00:14:42.994 20.290 - 20.385: 99.7485% ( 3) 00:14:42.994 20.385 - 20.480: 99.7694% ( 2) 00:14:42.994 20.480 - 20.575: 99.8009% ( 3) 00:14:42.994 20.575 - 20.670: 99.8218% ( 2) 00:14:42.994 20.670 - 20.764: 99.8428% ( 2) 00:14:42.994 20.764 - 20.859: 
99.8533% ( 1) 00:14:42.994 20.954 - 21.049: 99.8638% ( 1) 00:14:42.994 21.428 - 21.523: 99.8742% ( 1) 00:14:42.994 22.850 - 22.945: 99.8952% ( 2) 00:14:42.994 26.359 - 26.548: 99.9057% ( 1) 00:14:42.994 3980.705 - 4004.978: 99.9371% ( 3) 00:14:42.994 4004.978 - 4029.250: 100.0000% ( 6) 00:14:42.994 00:14:42.994 Complete histogram 00:14:42.994 ================== 00:14:42.994 Range in us Cumulative Count 00:14:42.994 2.868 - 2.880: 0.1153% ( 11) 00:14:42.994 2.880 - 2.892: 1.8130% ( 162) 00:14:42.994 2.892 - 2.904: 4.0453% ( 213) 00:14:42.994 2.904 - 2.916: 4.5693% ( 50) 00:14:42.994 2.916 - 2.927: 4.7055% ( 13) 00:14:42.994 2.927 - 2.939: 5.2400% ( 51) 00:14:42.994 2.939 - 2.951: 5.8059% ( 54) 00:14:42.994 2.951 - 2.963: 6.0679% ( 25) 00:14:42.994 2.963 - 2.975: 6.1727% ( 10) 00:14:42.994 2.975 - 2.987: 6.3928% ( 21) 00:14:42.994 2.987 - 2.999: 6.4661% ( 7) 00:14:42.994 2.999 - 3.010: 8.2582% ( 171) 00:14:42.994 3.010 - 3.022: 30.7064% ( 2142) 00:14:42.994 3.022 - 3.034: 57.6609% ( 2572) 00:14:42.994 3.034 - 3.058: 68.0989% ( 996) 00:14:42.994 3.058 - 3.081: 84.3848% ( 1554) 00:14:42.994 3.081 - 3.105: 90.9872% ( 630) 00:14:42.994 3.105 - 3.129: 94.6342% ( 348) 00:14:42.994 3.129 - 3.153: 95.5460% ( 87) 00:14:42.994 3.153 - 3.176: 96.0176% ( 45) 00:14:42.994 3.176 - 3.200: 96.2691% ( 24) 00:14:42.994 3.200 - 3.224: 96.5416% ( 26) 00:14:42.994 3.224 - 3.247: 97.3800% ( 80) 00:14:42.994 3.247 - 3.271: 98.0402% ( 63) 00:14:42.994 3.271 - 3.295: 98.1241% ( 8) 00:14:42.994 3.295 - 3.319: 98.1765% ( 5) 00:14:42.994 3.319 - 3.342: 98.2394% ( 6) 00:14:42.994 3.342 - 3.366: 98.2708% ( 3) 00:14:42.994 3.366 - 3.390: 98.3127% ( 4) 00:14:42.994 3.390 - 3.413: 98.3442% ( 3) 00:14:42.994 3.437 - 3.461: 98.3756% ( 3) 00:14:42.994 3.461 - 3.484: 98.3966% ( 2) 00:14:42.994 3.508 - 3.532: 98.4070% ( 1) 00:14:42.994 3.721 - 3.745: 98.4175% ( 1) 00:14:42.994 3.887 - 3.911: 98.4280% ( 1) 00:14:42.994 3.911 - 3.935: 98.4385% ( 1) 00:14:42.994 3.959 - 3.982: 98.4490% ( 1) 00:14:42.994 4.527 - 4.551: 98.4594% ( 1) 00:14:42.994 4.670 - 4.693: 98.4909% ( 3) 00:14:42.994 4.764 - 4.788: 98.5118% ( 2) 00:14:42.994 4.788 - 4.812: 98.5223% ( 1) 00:14:42.994 4.836 - 4.859: 98.5328% ( 1) 00:14:42.994 4.859 - 4.883: 98.5747% ( 4) 00:14:42.994 4.883 - 4.907: 98.5852% ( 1) 00:14:42.994 4.907 - 4.930: 98.5957% ( 1) 00:14:42.994 4.930 - 4.954: 98.6062% ( 1) 00:14:42.994 4.954 - 4.978: 98.6166% ( 1) 00:14:42.994 5.025 - 5.049: 98.6271% ( 1) 00:14:42.994 5.049 - 5.073: 98.6376% ( 1) 00:14:42.994 5.167 - 5.191: 98.6481% ( 1) 00:14:42.994 5.215 - 5.239: 98.6900% ( 4) 00:14:42.994 5.286 - 5.310: 98.7005% ( 1) 00:14:42.994 5.333 - 5.357: 98.7214% ( 2) 00:14:42.994 5.381 - 5.404: 98.7319% ( 1) 00:14:42.994 5.499 - 5.523: 98.7424% ( 1) 00:14:42.994 5.594 - 5.618: 98.7529% ( 1) 00:14:42.994 6.400 - 6.447: 98.7634% ( 1) 00:14:42.994 7.822 - 7.870: 98.7738% ( 1) 00:14:42.994 7.917 - 7.964: 98.7843% ( 1) 00:14:42.994 8.012 - 8.059: 98.7948% ( 1) 00:14:42.994 8.107 - 8.154: 98.8053% ( 1) 00:14:42.994 8.486 - 8.533: 98.8158% ( 1) 00:14:42.994 8.533 - 8.581: 98.8262% ( 1) 00:14:42.994 8.865 - 8.913: 98.8367% ( 1) 00:14:42.994 8.913 - 8.960: 98.8472% ( 1) 00:14:42.994 9.244 - 9.292: 98.8577% ( 1) 00:14:42.994 9.387 - 9.434: 98.8682% ( 1) 00:14:42.994 9.481 - 9.529: 98.8786% ( 1) 00:14:42.994 10.145 - 10.193: 98.8891% ( 1) 00:14:42.994 10.382 - 10.430: 98.8996% ( 1) 00:14:42.994 10.714 - 10.761: 98.9101% ( 1) 00:14:42.994 10.999 - 11.046: 98.9206% ( 1) 00:14:42.994 11.093 - 11.141: 98.9310% ( 1) 00:14:42.994 11.188 - 11.236: 98.9415% ( 1) 
00:14:42.994 15.170 - 15.265: 98.9520% ( 1) 00:14:42.994 17.161 - 17.256: 98.9625% ( 1) 00:14:42.994 17.256 - 17.351: 98.9730% ( 1) 00:14:42.994 17.730 - 17.825: 98.9834% ( 1) 00:14:42.994 17.825 - 17.920: 99.0044% ( 2) 00:14:42.994 17.920 - 18.015: 99.0254% ( 2) 00:14:42.994 18.110 - 18.204: 99.0463% ( 2) 00:14:42.994 18.204 - 18.299: 99.0568% ( 1) 00:14:42.994 18.394 - 18.489: 99.0882% ( 3) 00:14:42.994 18.773 - 18.868: 99.0987% ( 1) 00:14:42.994 18.868 - 18.963: 99.1092% ( 1) 00:14:42.994 3640.889 - 3665.161: 99.1197% ( 1) 00:14:42.994 3980.705 - 4004.978: 99.8742% ( 72) 00:14:42.994 [2024-07-24 19:06:48.548013] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:42.994 4004.978 - 4029.250: 100.0000% ( 12) 00:14:42.994 00:14:42.994 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:42.994 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:42.994 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:42.994 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:42.994 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:43.253 [ 00:14:43.253 { 00:14:43.253 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:43.253 "subtype": "Discovery", 00:14:43.253 "listen_addresses": [], 00:14:43.253 "allow_any_host": true, 00:14:43.253 "hosts": [] 00:14:43.253 }, 00:14:43.253 { 00:14:43.253 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:43.253 "subtype": "NVMe", 00:14:43.253 "listen_addresses": [ 00:14:43.253 { 00:14:43.253 "trtype": "VFIOUSER", 00:14:43.253 "adrfam": "IPv4", 00:14:43.253 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:43.253 "trsvcid": "0" 00:14:43.253 } 00:14:43.253 ], 00:14:43.253 "allow_any_host": true, 00:14:43.253 "hosts": [], 00:14:43.253 "serial_number": "SPDK1", 00:14:43.253 "model_number": "SPDK bdev Controller", 00:14:43.253 "max_namespaces": 32, 00:14:43.253 "min_cntlid": 1, 00:14:43.253 "max_cntlid": 65519, 00:14:43.253 "namespaces": [ 00:14:43.253 { 00:14:43.253 "nsid": 1, 00:14:43.253 "bdev_name": "Malloc1", 00:14:43.253 "name": "Malloc1", 00:14:43.253 "nguid": "46E893650AEA40C3AC4EF3C80FE05D7E", 00:14:43.253 "uuid": "46e89365-0aea-40c3-ac4e-f3c80fe05d7e" 00:14:43.253 } 00:14:43.253 ] 00:14:43.253 }, 00:14:43.253 { 00:14:43.253 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:43.253 "subtype": "NVMe", 00:14:43.253 "listen_addresses": [ 00:14:43.253 { 00:14:43.253 "trtype": "VFIOUSER", 00:14:43.253 "adrfam": "IPv4", 00:14:43.253 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:43.253 "trsvcid": "0" 00:14:43.253 } 00:14:43.253 ], 00:14:43.253 "allow_any_host": true, 00:14:43.253 "hosts": [], 00:14:43.253 "serial_number": "SPDK2", 00:14:43.253 "model_number": "SPDK bdev Controller", 00:14:43.253 "max_namespaces": 32, 00:14:43.253 "min_cntlid": 1, 00:14:43.253 "max_cntlid": 65519, 00:14:43.253 "namespaces": [ 00:14:43.253 { 00:14:43.253 "nsid": 1, 00:14:43.253 "bdev_name": "Malloc2", 00:14:43.253 "name": "Malloc2", 00:14:43.253 "nguid": "E8ECADE2600F420B85244494E8513CED", 00:14:43.253 "uuid": "e8ecade2-600f-420b-8524-4494e8513ced"
00:14:43.253 } 00:14:43.253 ] 00:14:43.253 } 00:14:43.253 ] 00:14:43.253 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:43.253 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1634531 00:14:43.253 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:43.253 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:43.253 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:43.253 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:43.253 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:43.253 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:43.253 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:43.253 19:06:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:43.511 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.511 [2024-07-24 19:06:49.136129] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:43.768 Malloc3 00:14:43.768 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:44.026 [2024-07-24 19:06:49.685030] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:44.026 19:06:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:44.284 Asynchronous Event Request test 00:14:44.284 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:44.284 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:44.284 Registering asynchronous event callbacks... 00:14:44.284 Starting namespace attribute notice tests for all controllers... 00:14:44.284 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:44.284 aer_cb - Changed Namespace 00:14:44.284 Cleaning up... 
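The "Changed Namespace" notice above is the point of the @90 AER test: a second namespace is hot-added over RPC while the aer tool is listening, and the nvmf_get_subsystems dump that follows confirms Malloc3 attached as NSID 2. A minimal sketch of that sequence, run from the SPDK repository root (relative paths stand in for the absolute workspace paths in this log; all commands and flags are copied from the harness invocations above):

  # Start the AER listener against the first vfio-user controller. The -t file
  # is the coordination mechanism visible in the log: the harness waits for
  # /tmp/aer_touch_file to appear (waitforfile) and removes it before proceeding.
  ./test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file &

  # Hot-add a namespace: create a 64 MB malloc bdev with 512-byte blocks and
  # attach it to the subsystem as NSID 2. This is what raises the async event.
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2

  # Verify: the subsystem listing should now show nsid 2 / Malloc3, as in the
  # JSON dump that follows.
  ./scripts/rpc.py nvmf_get_subsystems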
00:14:44.543 [ 00:14:44.543 { 00:14:44.543 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:44.543 "subtype": "Discovery", 00:14:44.543 "listen_addresses": [], 00:14:44.543 "allow_any_host": true, 00:14:44.543 "hosts": [] 00:14:44.543 }, 00:14:44.543 { 00:14:44.543 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:44.543 "subtype": "NVMe", 00:14:44.543 "listen_addresses": [ 00:14:44.543 { 00:14:44.543 "trtype": "VFIOUSER", 00:14:44.543 "adrfam": "IPv4", 00:14:44.543 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:44.543 "trsvcid": "0" 00:14:44.543 } 00:14:44.543 ], 00:14:44.543 "allow_any_host": true, 00:14:44.543 "hosts": [], 00:14:44.543 "serial_number": "SPDK1", 00:14:44.543 "model_number": "SPDK bdev Controller", 00:14:44.543 "max_namespaces": 32, 00:14:44.543 "min_cntlid": 1, 00:14:44.543 "max_cntlid": 65519, 00:14:44.543 "namespaces": [ 00:14:44.543 { 00:14:44.543 "nsid": 1, 00:14:44.543 "bdev_name": "Malloc1", 00:14:44.543 "name": "Malloc1", 00:14:44.543 "nguid": "46E893650AEA40C3AC4EF3C80FE05D7E", 00:14:44.543 "uuid": "46e89365-0aea-40c3-ac4e-f3c80fe05d7e" 00:14:44.543 }, 00:14:44.543 { 00:14:44.543 "nsid": 2, 00:14:44.543 "bdev_name": "Malloc3", 00:14:44.543 "name": "Malloc3", 00:14:44.543 "nguid": "D73A49A5D1DF4C1AA4D6C284353BC7D5", 00:14:44.543 "uuid": "d73a49a5-d1df-4c1a-a4d6-c284353bc7d5" 00:14:44.543 } 00:14:44.543 ] 00:14:44.543 }, 00:14:44.543 { 00:14:44.543 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:44.544 "subtype": "NVMe", 00:14:44.544 "listen_addresses": [ 00:14:44.544 { 00:14:44.544 "trtype": "VFIOUSER", 00:14:44.544 "adrfam": "IPv4", 00:14:44.544 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:44.544 "trsvcid": "0" 00:14:44.544 } 00:14:44.544 ], 00:14:44.544 "allow_any_host": true, 00:14:44.544 "hosts": [], 00:14:44.544 "serial_number": "SPDK2", 00:14:44.544 "model_number": "SPDK bdev Controller", 00:14:44.544 "max_namespaces": 32, 00:14:44.544 "min_cntlid": 1, 00:14:44.544 "max_cntlid": 65519, 00:14:44.544 "namespaces": [ 00:14:44.544 { 00:14:44.544 "nsid": 1, 00:14:44.544 "bdev_name": "Malloc2", 00:14:44.544 "name": "Malloc2", 00:14:44.544 "nguid": "E8ECADE2600F420B85244494E8513CED", 00:14:44.544 "uuid": "e8ecade2-600f-420b-8524-4494e8513ced" 00:14:44.544 } 00:14:44.544 ] 00:14:44.544 } 00:14:44.544 ] 00:14:44.544 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1634531 00:14:44.544 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:44.544 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:44.544 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:44.544 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:44.544 [2024-07-24 19:06:50.118549] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
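Everything from here down to the end of the controller data dump is the @83 spdk_nvme_identify pass against the second controller. The invocation shape, annotated (values copied from the command above; -g corresponds to the --single-file-segments EAL option visible in the parameter line below, and -L enables the per-component *DEBUG* traces that follow):

  ./build/bin/spdk_nvme_identify \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
      -g \
      -L nvme -L nvme_vfio -L vfio_pci

The transport string does all the work for vfio-user: trtype selects the VFIOUSER transport, traddr points at the controller's socket directory rather than a PCI address, and subnqn names the subsystem to connect to. The *DEBUG* lines that follow are the host-side init state machine walking CC.EN/CSTS.RDY, the identify commands, AER configuration, and queue setup against the user-space device.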
00:14:44.544 [2024-07-24 19:06:50.118607] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1634666 ] 00:14:44.544 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.544 [2024-07-24 19:06:50.163595] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:44.544 [2024-07-24 19:06:50.166145] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:44.544 [2024-07-24 19:06:50.166187] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff9ba0c4000 00:14:44.544 [2024-07-24 19:06:50.167142] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:44.544 [2024-07-24 19:06:50.168152] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:44.544 [2024-07-24 19:06:50.169158] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:44.544 [2024-07-24 19:06:50.170162] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:44.544 [2024-07-24 19:06:50.171167] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:44.544 [2024-07-24 19:06:50.172176] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:44.544 [2024-07-24 19:06:50.173189] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:44.544 [2024-07-24 19:06:50.174199] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:44.544 [2024-07-24 19:06:50.175216] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:44.544 [2024-07-24 19:06:50.175246] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff9ba0b9000 00:14:44.544 [2024-07-24 19:06:50.176825] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:44.544 [2024-07-24 19:06:50.197940] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:44.544 [2024-07-24 19:06:50.197991] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:44.544 [2024-07-24 19:06:50.203140] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:44.544 [2024-07-24 19:06:50.203218] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:44.544 [2024-07-24 19:06:50.203343] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:14:44.544 [2024-07-24 19:06:50.203379] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:44.544 [2024-07-24 19:06:50.203394] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:44.544 [2024-07-24 19:06:50.204139] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:44.544 [2024-07-24 19:06:50.204176] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:44.544 [2024-07-24 19:06:50.204197] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:44.544 [2024-07-24 19:06:50.205144] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:44.544 [2024-07-24 19:06:50.205172] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:44.544 [2024-07-24 19:06:50.205204] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:44.544 [2024-07-24 19:06:50.206151] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:44.544 [2024-07-24 19:06:50.206180] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:44.544 [2024-07-24 19:06:50.207155] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:44.544 [2024-07-24 19:06:50.207184] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:44.544 [2024-07-24 19:06:50.207197] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:44.544 [2024-07-24 19:06:50.207214] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:44.544 [2024-07-24 19:06:50.207327] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:44.544 [2024-07-24 19:06:50.207338] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:44.544 [2024-07-24 19:06:50.207350] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:44.544 [2024-07-24 19:06:50.208164] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:44.544 [2024-07-24 19:06:50.209173] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:44.544 [2024-07-24 19:06:50.210181] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:44.544 [2024-07-24 19:06:50.211184] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:44.544 [2024-07-24 19:06:50.211290] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:44.544 [2024-07-24 19:06:50.212201] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:44.544 [2024-07-24 19:06:50.212229] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:44.544 [2024-07-24 19:06:50.212242] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:44.544 [2024-07-24 19:06:50.212275] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:44.544 [2024-07-24 19:06:50.212301] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:44.544 [2024-07-24 19:06:50.212335] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:44.544 [2024-07-24 19:06:50.212349] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:44.544 [2024-07-24 19:06:50.212358] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.544 [2024-07-24 19:06:50.212382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:44.544 [2024-07-24 19:06:50.220449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:44.544 [2024-07-24 19:06:50.220480] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:44.544 [2024-07-24 19:06:50.220492] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:44.544 [2024-07-24 19:06:50.220502] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:44.544 [2024-07-24 19:06:50.220513] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:44.544 [2024-07-24 19:06:50.220524] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:44.544 [2024-07-24 19:06:50.220534] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:44.544 [2024-07-24 19:06:50.220545] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:44.544 [2024-07-24 19:06:50.220563] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:44.545 [2024-07-24 19:06:50.220591] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:44.545 [2024-07-24 19:06:50.228446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:44.545 [2024-07-24 19:06:50.228485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.545 [2024-07-24 19:06:50.228505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.545 [2024-07-24 19:06:50.228521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.545 [2024-07-24 19:06:50.228538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:44.545 [2024-07-24 19:06:50.228549] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:44.545 [2024-07-24 19:06:50.228570] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:44.545 [2024-07-24 19:06:50.228590] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:44.545 [2024-07-24 19:06:50.236446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:44.545 [2024-07-24 19:06:50.236471] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:44.545 [2024-07-24 19:06:50.236484] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:44.545 [2024-07-24 19:06:50.236507] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:44.545 [2024-07-24 19:06:50.236522] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:44.545 [2024-07-24 19:06:50.236542] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:44.820 [2024-07-24 19:06:50.244445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:44.820 [2024-07-24 19:06:50.244574] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:44.820 [2024-07-24 19:06:50.244601] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:44.820 [2024-07-24 19:06:50.244620] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:44.820 [2024-07-24 19:06:50.244632] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:44.820 [2024-07-24 
19:06:50.244641] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.820 [2024-07-24 19:06:50.244655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:44.820 [2024-07-24 19:06:50.252460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:44.820 [2024-07-24 19:06:50.252494] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:44.820 [2024-07-24 19:06:50.252519] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:44.820 [2024-07-24 19:06:50.252539] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:44.820 [2024-07-24 19:06:50.252556] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:44.820 [2024-07-24 19:06:50.252568] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:44.820 [2024-07-24 19:06:50.252577] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.820 [2024-07-24 19:06:50.252590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:44.820 [2024-07-24 19:06:50.260446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:44.820 [2024-07-24 19:06:50.260488] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:44.820 [2024-07-24 19:06:50.260511] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:44.820 [2024-07-24 19:06:50.260530] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:44.820 [2024-07-24 19:06:50.260541] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:44.820 [2024-07-24 19:06:50.260550] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.820 [2024-07-24 19:06:50.260563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:44.820 [2024-07-24 19:06:50.268448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:44.820 [2024-07-24 19:06:50.268477] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:44.820 [2024-07-24 19:06:50.268494] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:44.820 [2024-07-24 19:06:50.268515] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:44.820 [2024-07-24 
19:06:50.268536] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:44.820 [2024-07-24 19:06:50.268549] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:44.820 [2024-07-24 19:06:50.268566] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:44.820 [2024-07-24 19:06:50.268578] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:44.820 [2024-07-24 19:06:50.268588] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:44.820 [2024-07-24 19:06:50.268600] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:44.820 [2024-07-24 19:06:50.268636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:44.820 [2024-07-24 19:06:50.275472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:44.820 [2024-07-24 19:06:50.275509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:44.820 [2024-07-24 19:06:50.284446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:44.820 [2024-07-24 19:06:50.284482] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:44.820 [2024-07-24 19:06:50.292443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:44.820 [2024-07-24 19:06:50.292479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:44.820 [2024-07-24 19:06:50.300448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:44.820 [2024-07-24 19:06:50.300501] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:44.820 [2024-07-24 19:06:50.300517] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:44.820 [2024-07-24 19:06:50.300526] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:44.820 [2024-07-24 19:06:50.300535] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:44.820 [2024-07-24 19:06:50.300543] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:44.820 [2024-07-24 19:06:50.300556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:44.820 [2024-07-24 19:06:50.300573] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:44.820 [2024-07-24 19:06:50.300584] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fc000 00:14:44.820 [2024-07-24 19:06:50.300593] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.820 [2024-07-24 19:06:50.300605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:44.820 [2024-07-24 19:06:50.300621] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:44.820 [2024-07-24 19:06:50.300632] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:44.820 [2024-07-24 19:06:50.300640] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.820 [2024-07-24 19:06:50.300652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:44.820 [2024-07-24 19:06:50.300669] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:44.820 [2024-07-24 19:06:50.300694] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:44.820 [2024-07-24 19:06:50.300703] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:44.820 [2024-07-24 19:06:50.300716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:44.820 [2024-07-24 19:06:50.308444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:44.820 [2024-07-24 19:06:50.308497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:44.820 [2024-07-24 19:06:50.308524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:44.821 [2024-07-24 19:06:50.308542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:44.821 ===================================================== 00:14:44.821 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:44.821 ===================================================== 00:14:44.821 Controller Capabilities/Features 00:14:44.821 ================================ 00:14:44.821 Vendor ID: 4e58 00:14:44.821 Subsystem Vendor ID: 4e58 00:14:44.821 Serial Number: SPDK2 00:14:44.821 Model Number: SPDK bdev Controller 00:14:44.821 Firmware Version: 24.09 00:14:44.821 Recommended Arb Burst: 6 00:14:44.821 IEEE OUI Identifier: 8d 6b 50 00:14:44.821 Multi-path I/O 00:14:44.821 May have multiple subsystem ports: Yes 00:14:44.821 May have multiple controllers: Yes 00:14:44.821 Associated with SR-IOV VF: No 00:14:44.821 Max Data Transfer Size: 131072 00:14:44.821 Max Number of Namespaces: 32 00:14:44.821 Max Number of I/O Queues: 127 00:14:44.821 NVMe Specification Version (VS): 1.3 00:14:44.821 NVMe Specification Version (Identify): 1.3 00:14:44.821 Maximum Queue Entries: 256 00:14:44.821 Contiguous Queues Required: Yes 00:14:44.821 Arbitration Mechanisms Supported 00:14:44.821 Weighted Round Robin: Not Supported 00:14:44.821 Vendor Specific: Not Supported 00:14:44.821 Reset Timeout: 15000 ms 00:14:44.821 Doorbell Stride: 4 
bytes 00:14:44.821 NVM Subsystem Reset: Not Supported 00:14:44.821 Command Sets Supported 00:14:44.821 NVM Command Set: Supported 00:14:44.821 Boot Partition: Not Supported 00:14:44.821 Memory Page Size Minimum: 4096 bytes 00:14:44.821 Memory Page Size Maximum: 4096 bytes 00:14:44.821 Persistent Memory Region: Not Supported 00:14:44.821 Optional Asynchronous Events Supported 00:14:44.821 Namespace Attribute Notices: Supported 00:14:44.821 Firmware Activation Notices: Not Supported 00:14:44.821 ANA Change Notices: Not Supported 00:14:44.821 PLE Aggregate Log Change Notices: Not Supported 00:14:44.821 LBA Status Info Alert Notices: Not Supported 00:14:44.821 EGE Aggregate Log Change Notices: Not Supported 00:14:44.821 Normal NVM Subsystem Shutdown event: Not Supported 00:14:44.821 Zone Descriptor Change Notices: Not Supported 00:14:44.821 Discovery Log Change Notices: Not Supported 00:14:44.821 Controller Attributes 00:14:44.821 128-bit Host Identifier: Supported 00:14:44.821 Non-Operational Permissive Mode: Not Supported 00:14:44.821 NVM Sets: Not Supported 00:14:44.821 Read Recovery Levels: Not Supported 00:14:44.821 Endurance Groups: Not Supported 00:14:44.821 Predictable Latency Mode: Not Supported 00:14:44.821 Traffic Based Keep ALive: Not Supported 00:14:44.821 Namespace Granularity: Not Supported 00:14:44.821 SQ Associations: Not Supported 00:14:44.821 UUID List: Not Supported 00:14:44.821 Multi-Domain Subsystem: Not Supported 00:14:44.821 Fixed Capacity Management: Not Supported 00:14:44.821 Variable Capacity Management: Not Supported 00:14:44.821 Delete Endurance Group: Not Supported 00:14:44.821 Delete NVM Set: Not Supported 00:14:44.821 Extended LBA Formats Supported: Not Supported 00:14:44.821 Flexible Data Placement Supported: Not Supported 00:14:44.821 00:14:44.821 Controller Memory Buffer Support 00:14:44.821 ================================ 00:14:44.821 Supported: No 00:14:44.821 00:14:44.821 Persistent Memory Region Support 00:14:44.821 ================================ 00:14:44.821 Supported: No 00:14:44.821 00:14:44.821 Admin Command Set Attributes 00:14:44.821 ============================ 00:14:44.821 Security Send/Receive: Not Supported 00:14:44.821 Format NVM: Not Supported 00:14:44.821 Firmware Activate/Download: Not Supported 00:14:44.821 Namespace Management: Not Supported 00:14:44.821 Device Self-Test: Not Supported 00:14:44.821 Directives: Not Supported 00:14:44.821 NVMe-MI: Not Supported 00:14:44.821 Virtualization Management: Not Supported 00:14:44.821 Doorbell Buffer Config: Not Supported 00:14:44.821 Get LBA Status Capability: Not Supported 00:14:44.821 Command & Feature Lockdown Capability: Not Supported 00:14:44.821 Abort Command Limit: 4 00:14:44.821 Async Event Request Limit: 4 00:14:44.821 Number of Firmware Slots: N/A 00:14:44.821 Firmware Slot 1 Read-Only: N/A 00:14:44.821 Firmware Activation Without Reset: N/A 00:14:44.821 Multiple Update Detection Support: N/A 00:14:44.821 Firmware Update Granularity: No Information Provided 00:14:44.821 Per-Namespace SMART Log: No 00:14:44.821 Asymmetric Namespace Access Log Page: Not Supported 00:14:44.821 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:44.821 Command Effects Log Page: Supported 00:14:44.821 Get Log Page Extended Data: Supported 00:14:44.821 Telemetry Log Pages: Not Supported 00:14:44.821 Persistent Event Log Pages: Not Supported 00:14:44.821 Supported Log Pages Log Page: May Support 00:14:44.821 Commands Supported & Effects Log Page: Not Supported 00:14:44.821 Feature Identifiers & Effects Log 
Page:May Support 00:14:44.821 NVMe-MI Commands & Effects Log Page: May Support 00:14:44.821 Data Area 4 for Telemetry Log: Not Supported 00:14:44.821 Error Log Page Entries Supported: 128 00:14:44.821 Keep Alive: Supported 00:14:44.821 Keep Alive Granularity: 10000 ms 00:14:44.821 00:14:44.821 NVM Command Set Attributes 00:14:44.821 ========================== 00:14:44.821 Submission Queue Entry Size 00:14:44.821 Max: 64 00:14:44.821 Min: 64 00:14:44.821 Completion Queue Entry Size 00:14:44.821 Max: 16 00:14:44.821 Min: 16 00:14:44.821 Number of Namespaces: 32 00:14:44.821 Compare Command: Supported 00:14:44.821 Write Uncorrectable Command: Not Supported 00:14:44.821 Dataset Management Command: Supported 00:14:44.821 Write Zeroes Command: Supported 00:14:44.821 Set Features Save Field: Not Supported 00:14:44.821 Reservations: Not Supported 00:14:44.821 Timestamp: Not Supported 00:14:44.821 Copy: Supported 00:14:44.821 Volatile Write Cache: Present 00:14:44.821 Atomic Write Unit (Normal): 1 00:14:44.821 Atomic Write Unit (PFail): 1 00:14:44.821 Atomic Compare & Write Unit: 1 00:14:44.821 Fused Compare & Write: Supported 00:14:44.821 Scatter-Gather List 00:14:44.821 SGL Command Set: Supported (Dword aligned) 00:14:44.821 SGL Keyed: Not Supported 00:14:44.821 SGL Bit Bucket Descriptor: Not Supported 00:14:44.821 SGL Metadata Pointer: Not Supported 00:14:44.821 Oversized SGL: Not Supported 00:14:44.821 SGL Metadata Address: Not Supported 00:14:44.821 SGL Offset: Not Supported 00:14:44.821 Transport SGL Data Block: Not Supported 00:14:44.821 Replay Protected Memory Block: Not Supported 00:14:44.821 00:14:44.821 Firmware Slot Information 00:14:44.821 ========================= 00:14:44.821 Active slot: 1 00:14:44.821 Slot 1 Firmware Revision: 24.09 00:14:44.821 00:14:44.821 00:14:44.821 Commands Supported and Effects 00:14:44.821 ============================== 00:14:44.821 Admin Commands 00:14:44.821 -------------- 00:14:44.821 Get Log Page (02h): Supported 00:14:44.821 Identify (06h): Supported 00:14:44.821 Abort (08h): Supported 00:14:44.821 Set Features (09h): Supported 00:14:44.821 Get Features (0Ah): Supported 00:14:44.821 Asynchronous Event Request (0Ch): Supported 00:14:44.821 Keep Alive (18h): Supported 00:14:44.821 I/O Commands 00:14:44.821 ------------ 00:14:44.821 Flush (00h): Supported LBA-Change 00:14:44.821 Write (01h): Supported LBA-Change 00:14:44.821 Read (02h): Supported 00:14:44.821 Compare (05h): Supported 00:14:44.821 Write Zeroes (08h): Supported LBA-Change 00:14:44.821 Dataset Management (09h): Supported LBA-Change 00:14:44.821 Copy (19h): Supported LBA-Change 00:14:44.821 00:14:44.821 Error Log 00:14:44.821 ========= 00:14:44.821 00:14:44.821 Arbitration 00:14:44.821 =========== 00:14:44.821 Arbitration Burst: 1 00:14:44.821 00:14:44.821 Power Management 00:14:44.821 ================ 00:14:44.821 Number of Power States: 1 00:14:44.821 Current Power State: Power State #0 00:14:44.821 Power State #0: 00:14:44.821 Max Power: 0.00 W 00:14:44.821 Non-Operational State: Operational 00:14:44.821 Entry Latency: Not Reported 00:14:44.821 Exit Latency: Not Reported 00:14:44.821 Relative Read Throughput: 0 00:14:44.821 Relative Read Latency: 0 00:14:44.821 Relative Write Throughput: 0 00:14:44.821 Relative Write Latency: 0 00:14:44.821 Idle Power: Not Reported 00:14:44.821 Active Power: Not Reported 00:14:44.821 Non-Operational Permissive Mode: Not Supported 00:14:44.821 00:14:44.821 Health Information 00:14:44.821 ================== 00:14:44.821 Critical Warnings: 00:14:44.821 
Available Spare Space: OK 00:14:44.821 Temperature: OK 00:14:44.821 Device Reliability: OK 00:14:44.821 Read Only: No 00:14:44.822 Volatile Memory Backup: OK 00:14:44.822 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:44.822 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:44.822 Available Spare: 0% 00:14:44.822 Available Spare Threshold: 0% 00:14:44.822 [2024-07-24 19:06:50.308724] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:44.822 [2024-07-24 19:06:50.316445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:44.822 [2024-07-24 19:06:50.316528] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:44.822 [2024-07-24 19:06:50.316553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.822 [2024-07-24 19:06:50.316568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.822 [2024-07-24 19:06:50.316582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.822 [2024-07-24 19:06:50.316596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.822 [2024-07-24 19:06:50.316692] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:44.822 [2024-07-24 19:06:50.316722] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:44.822 [2024-07-24 19:06:50.317693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:44.822 [2024-07-24 19:06:50.317803] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:44.822 [2024-07-24 19:06:50.317824] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:44.822 [2024-07-24 19:06:50.318702] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:44.822 [2024-07-24 19:06:50.318743] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:44.822 [2024-07-24 19:06:50.318821] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:44.822 [2024-07-24 19:06:50.320496] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:44.822 Life Percentage Used: 0% 00:14:44.822 Data Units Read: 0 00:14:44.822 Data Units Written: 0 00:14:44.822 Host Read Commands: 0 00:14:44.822 Host Write Commands: 0 00:14:44.822 Controller Busy Time: 0 minutes 00:14:44.822 Power Cycles: 0 00:14:44.822 Power On Hours: 0 hours 00:14:44.822 Unsafe Shutdowns: 0 00:14:44.822 Unrecoverable Media Errors: 0 00:14:44.822 Lifetime Error Log Entries: 0 00:14:44.822 Warning Temperature Time: 0 minutes 00:14:44.822 Critical Temperature Time: 0 minutes 00:14:44.822 
00:14:44.822 Number of Queues 00:14:44.822 ================ 00:14:44.822 Number of I/O Submission Queues: 127 00:14:44.822 Number of I/O Completion Queues: 127 00:14:44.822 00:14:44.822 Active Namespaces 00:14:44.822 ================= 00:14:44.822 Namespace ID:1 00:14:44.822 Error Recovery Timeout: Unlimited 00:14:44.822 Command Set Identifier: NVM (00h) 00:14:44.822 Deallocate: Supported 00:14:44.822 Deallocated/Unwritten Error: Not Supported 00:14:44.822 Deallocated Read Value: Unknown 00:14:44.822 Deallocate in Write Zeroes: Not Supported 00:14:44.822 Deallocated Guard Field: 0xFFFF 00:14:44.822 Flush: Supported 00:14:44.822 Reservation: Supported 00:14:44.822 Namespace Sharing Capabilities: Multiple Controllers 00:14:44.822 Size (in LBAs): 131072 (0GiB) 00:14:44.822 Capacity (in LBAs): 131072 (0GiB) 00:14:44.822 Utilization (in LBAs): 131072 (0GiB) 00:14:44.822 NGUID: E8ECADE2600F420B85244494E8513CED 00:14:44.822 UUID: e8ecade2-600f-420b-8524-4494e8513ced 00:14:44.822 Thin Provisioning: Not Supported 00:14:44.822 Per-NS Atomic Units: Yes 00:14:44.822 Atomic Boundary Size (Normal): 0 00:14:44.822 Atomic Boundary Size (PFail): 0 00:14:44.822 Atomic Boundary Offset: 0 00:14:44.822 Maximum Single Source Range Length: 65535 00:14:44.822 Maximum Copy Length: 65535 00:14:44.822 Maximum Source Range Count: 1 00:14:44.822 NGUID/EUI64 Never Reused: No 00:14:44.822 Namespace Write Protected: No 00:14:44.822 Number of LBA Formats: 1 00:14:44.822 Current LBA Format: LBA Format #00 00:14:44.822 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:44.822 00:14:44.822 19:06:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:44.822 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.094 [2024-07-24 19:06:50.659277] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:50.360 Initializing NVMe Controllers 00:14:50.360 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:50.360 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:50.360 Initialization complete. Launching workers. 
00:14:50.360 ======================================================== 00:14:50.360 Latency(us) 00:14:50.360 Device Information : IOPS MiB/s Average min max 00:14:50.360 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24078.19 94.06 5315.94 1674.81 9641.05 00:14:50.360 ======================================================== 00:14:50.360 Total : 24078.19 94.06 5315.94 1674.81 9641.05 00:14:50.360 00:14:50.360 [2024-07-24 19:06:55.762837] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:50.360 19:06:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:50.360 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.360 [2024-07-24 19:06:56.044743] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:55.626 Initializing NVMe Controllers 00:14:55.626 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:55.626 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:55.626 Initialization complete. Launching workers. 00:14:55.626 ======================================================== 00:14:55.626 Latency(us) 00:14:55.626 Device Information : IOPS MiB/s Average min max 00:14:55.626 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24087.59 94.09 5317.33 1653.77 9556.78 00:14:55.626 ======================================================== 00:14:55.626 Total : 24087.59 94.09 5317.33 1653.77 9556.78 00:14:55.626 00:14:55.626 [2024-07-24 19:07:01.067618] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:55.626 19:07:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:55.626 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.884 [2024-07-24 19:07:01.337419] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:01.152 [2024-07-24 19:07:06.479601] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:01.152 Initializing NVMe Controllers 00:15:01.152 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:01.152 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:01.152 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:01.152 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:01.152 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:01.152 Initialization complete. Launching workers. 
00:15:01.152 Starting thread on core 2 00:15:01.152 Starting thread on core 3 00:15:01.152 Starting thread on core 1 00:15:01.152 19:07:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:01.152 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.410 [2024-07-24 19:07:06.848653] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:05.595 [2024-07-24 19:07:10.424839] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:05.595 Initializing NVMe Controllers 00:15:05.595 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:05.595 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:05.595 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:05.595 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:05.595 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:05.595 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:05.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:05.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:05.595 Initialization complete. Launching workers. 00:15:05.595 Starting thread on core 1 with urgent priority queue 00:15:05.595 Starting thread on core 2 with urgent priority queue 00:15:05.595 Starting thread on core 3 with urgent priority queue 00:15:05.595 Starting thread on core 0 with urgent priority queue 00:15:05.595 SPDK bdev Controller (SPDK2 ) core 0: 1314.33 IO/s 76.08 secs/100000 ios 00:15:05.595 SPDK bdev Controller (SPDK2 ) core 1: 1286.00 IO/s 77.76 secs/100000 ios 00:15:05.595 SPDK bdev Controller (SPDK2 ) core 2: 1258.00 IO/s 79.49 secs/100000 ios 00:15:05.595 SPDK bdev Controller (SPDK2 ) core 3: 1280.67 IO/s 78.08 secs/100000 ios 00:15:05.595 ======================================================== 00:15:05.595 00:15:05.595 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:05.595 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.595 [2024-07-24 19:07:10.798184] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:05.595 Initializing NVMe Controllers 00:15:05.595 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:05.595 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:05.595 Namespace ID: 1 size: 0GB 00:15:05.595 Initialization complete. 00:15:05.595 INFO: using host memory buffer for IO 00:15:05.595 Hello world! 
00:15:05.595 [2024-07-24 19:07:10.808469] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:05.595 19:07:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:05.595 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.595 [2024-07-24 19:07:11.155318] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:06.970 Initializing NVMe Controllers 00:15:06.970 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.970 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:06.970 Initialization complete. Launching workers. 00:15:06.970 submit (in ns) avg, min, max = 11201.1, 4831.1, 4008048.9 00:15:06.970 complete (in ns) avg, min, max = 34621.4, 2881.5, 4017380.0 00:15:06.970 00:15:06.970 Submit histogram 00:15:06.970 ================ 00:15:06.970 Range in us Cumulative Count 00:15:06.970 4.812 - 4.836: 0.0103% ( 1) 00:15:06.970 4.836 - 4.859: 0.0206% ( 1) 00:15:06.970 4.859 - 4.883: 0.0308% ( 1) 00:15:06.970 4.883 - 4.907: 0.0514% ( 2) 00:15:06.970 4.907 - 4.930: 0.0822% ( 3) 00:15:06.970 4.930 - 4.954: 0.1439% ( 6) 00:15:06.970 4.954 - 4.978: 0.1953% ( 5) 00:15:06.970 4.978 - 5.001: 0.2672% ( 7) 00:15:06.970 5.001 - 5.025: 0.3494% ( 8) 00:15:06.970 5.025 - 5.049: 0.4522% ( 10) 00:15:06.970 5.049 - 5.073: 1.3566% ( 88) 00:15:06.970 5.073 - 5.096: 2.8469% ( 145) 00:15:06.970 5.096 - 5.120: 5.2929% ( 238) 00:15:06.970 5.120 - 5.144: 10.0308% ( 461) 00:15:06.970 5.144 - 5.167: 15.4573% ( 528) 00:15:06.970 5.167 - 5.191: 22.1994% ( 656) 00:15:06.970 5.191 - 5.215: 29.3628% ( 697) 00:15:06.970 5.215 - 5.239: 35.7451% ( 621) 00:15:06.970 5.239 - 5.262: 41.6341% ( 573) 00:15:06.970 5.262 - 5.286: 45.6629% ( 392) 00:15:06.970 5.286 - 5.310: 49.0339% ( 328) 00:15:06.970 5.310 - 5.333: 52.5180% ( 339) 00:15:06.970 5.333 - 5.357: 56.2590% ( 364) 00:15:06.970 5.357 - 5.381: 60.4933% ( 412) 00:15:06.970 5.381 - 5.404: 64.5940% ( 399) 00:15:06.970 5.404 - 5.428: 68.2220% ( 353) 00:15:06.970 5.428 - 5.452: 70.9764% ( 268) 00:15:06.970 5.452 - 5.476: 73.0421% ( 201) 00:15:06.970 5.476 - 5.499: 74.5118% ( 143) 00:15:06.970 5.499 - 5.523: 75.5704% ( 103) 00:15:06.970 5.523 - 5.547: 76.4337% ( 84) 00:15:06.971 5.547 - 5.570: 77.1942% ( 74) 00:15:06.971 5.570 - 5.594: 77.7184% ( 51) 00:15:06.971 5.594 - 5.618: 78.1501% ( 42) 00:15:06.971 5.618 - 5.641: 78.3042% ( 15) 00:15:06.971 5.641 - 5.665: 78.4481% ( 14) 00:15:06.971 5.665 - 5.689: 79.0031% ( 54) 00:15:06.971 5.689 - 5.713: 81.0483% ( 199) 00:15:06.971 5.713 - 5.736: 86.3618% ( 517) 00:15:06.971 5.736 - 5.760: 90.0822% ( 362) 00:15:06.971 5.760 - 5.784: 94.6454% ( 444) 00:15:06.971 5.784 - 5.807: 95.8890% ( 121) 00:15:06.971 5.807 - 5.831: 96.2282% ( 33) 00:15:06.971 5.831 - 5.855: 96.4029% ( 17) 00:15:06.971 5.855 - 5.879: 96.5365% ( 13) 00:15:06.971 5.879 - 5.902: 96.7112% ( 17) 00:15:06.971 5.902 - 5.926: 96.8243% ( 11) 00:15:06.971 5.926 - 5.950: 96.8756% ( 5) 00:15:06.971 5.950 - 5.973: 96.9887% ( 11) 00:15:06.971 5.973 - 5.997: 97.1017% ( 11) 00:15:06.971 5.997 - 6.021: 97.3587% ( 25) 00:15:06.971 6.021 - 6.044: 97.5334% ( 17) 00:15:06.971 6.044 - 6.068: 97.6156% ( 8) 00:15:06.971 6.068 - 6.116: 97.6773% ( 6) 00:15:06.971 6.116 - 6.163: 97.7287% ( 5) 00:15:06.971 6.163 - 6.210: 97.7903% ( 6) 
00:15:06.971 6.210 - 6.258: 97.8212% ( 3) 00:15:06.971 6.258 - 6.305: 97.9239% ( 10) 00:15:06.971 6.305 - 6.353: 98.0781% ( 15) 00:15:06.971 6.353 - 6.400: 98.1398% ( 6) 00:15:06.971 6.400 - 6.447: 98.2734% ( 13) 00:15:06.971 6.447 - 6.495: 98.4070% ( 13) 00:15:06.971 6.495 - 6.542: 98.4687% ( 6) 00:15:06.971 6.542 - 6.590: 98.5509% ( 8) 00:15:06.971 6.590 - 6.637: 98.6023% ( 5) 00:15:06.971 6.637 - 6.684: 98.6639% ( 6) 00:15:06.971 6.684 - 6.732: 98.6742% ( 1) 00:15:06.971 6.732 - 6.779: 98.6845% ( 1) 00:15:06.971 6.779 - 6.827: 98.7050% ( 2) 00:15:06.971 6.827 - 6.874: 98.7359% ( 3) 00:15:06.971 6.874 - 6.921: 98.7564% ( 2) 00:15:06.971 6.921 - 6.969: 98.8284% ( 7) 00:15:06.971 6.969 - 7.016: 98.8592% ( 3) 00:15:06.971 7.064 - 7.111: 98.8900% ( 3) 00:15:06.971 7.111 - 7.159: 98.9209% ( 3) 00:15:06.971 7.159 - 7.206: 98.9311% ( 1) 00:15:06.971 7.206 - 7.253: 98.9723% ( 4) 00:15:06.971 7.253 - 7.301: 98.9825% ( 1) 00:15:06.971 7.301 - 7.348: 98.9928% ( 1) 00:15:06.971 7.396 - 7.443: 99.0236% ( 3) 00:15:06.971 7.443 - 7.490: 99.0545% ( 3) 00:15:06.971 7.633 - 7.680: 99.0750% ( 2) 00:15:06.971 7.727 - 7.775: 99.0853% ( 1) 00:15:06.971 7.775 - 7.822: 99.0956% ( 1) 00:15:06.971 7.822 - 7.870: 99.1059% ( 1) 00:15:06.971 8.059 - 8.107: 99.1161% ( 1) 00:15:06.971 8.296 - 8.344: 99.1264% ( 1) 00:15:06.971 8.391 - 8.439: 99.1367% ( 1) 00:15:06.971 8.818 - 8.865: 99.1470% ( 1) 00:15:06.971 9.102 - 9.150: 99.1572% ( 1) 00:15:06.971 9.292 - 9.339: 99.1778% ( 2) 00:15:06.971 9.481 - 9.529: 99.1984% ( 2) 00:15:06.971 9.529 - 9.576: 99.2086% ( 1) 00:15:06.971 9.576 - 9.624: 99.2189% ( 1) 00:15:06.971 9.861 - 9.908: 99.2292% ( 1) 00:15:06.971 9.908 - 9.956: 99.2395% ( 1) 00:15:06.971 10.003 - 10.050: 99.2497% ( 1) 00:15:06.971 10.240 - 10.287: 99.2600% ( 1) 00:15:06.971 10.287 - 10.335: 99.2703% ( 1) 00:15:06.971 10.335 - 10.382: 99.2806% ( 1) 00:15:06.971 10.382 - 10.430: 99.2909% ( 1) 00:15:06.971 10.430 - 10.477: 99.3114% ( 2) 00:15:06.971 10.477 - 10.524: 99.3217% ( 1) 00:15:06.971 10.714 - 10.761: 99.3320% ( 1) 00:15:06.971 10.809 - 10.856: 99.3422% ( 1) 00:15:06.971 10.904 - 10.951: 99.3628% ( 2) 00:15:06.971 10.999 - 11.046: 99.3834% ( 2) 00:15:06.971 11.141 - 11.188: 99.3936% ( 1) 00:15:06.971 11.283 - 11.330: 99.4039% ( 1) 00:15:06.971 11.425 - 11.473: 99.4245% ( 2) 00:15:06.971 11.473 - 11.520: 99.4347% ( 1) 00:15:06.971 11.567 - 11.615: 99.4450% ( 1) 00:15:06.971 11.662 - 11.710: 99.4553% ( 1) 00:15:06.971 11.710 - 11.757: 99.4656% ( 1) 00:15:06.971 11.757 - 11.804: 99.4758% ( 1) 00:15:06.971 11.804 - 11.852: 99.4861% ( 1) 00:15:06.971 11.947 - 11.994: 99.4964% ( 1) 00:15:06.971 11.994 - 12.041: 99.5067% ( 1) 00:15:06.971 12.089 - 12.136: 99.5272% ( 2) 00:15:06.971 12.136 - 12.231: 99.5478% ( 2) 00:15:06.971 12.231 - 12.326: 99.5581% ( 1) 00:15:06.971 12.326 - 12.421: 99.5889% ( 3) 00:15:06.971 12.610 - 12.705: 99.5992% ( 1) 00:15:06.971 12.705 - 12.800: 99.6095% ( 1) 00:15:06.971 12.990 - 13.084: 99.6197% ( 1) 00:15:06.971 13.179 - 13.274: 99.6300% ( 1) 00:15:06.971 13.274 - 13.369: 99.6403% ( 1) 00:15:06.971 13.653 - 13.748: 99.6506% ( 1) 00:15:06.971 13.748 - 13.843: 99.6711% ( 2) 00:15:06.971 13.938 - 14.033: 99.6814% ( 1) 00:15:06.971 14.222 - 14.317: 99.7020% ( 2) 00:15:06.971 14.601 - 14.696: 99.7122% ( 1) 00:15:06.971 14.696 - 14.791: 99.7225% ( 1) 00:15:06.971 14.886 - 14.981: 99.7328% ( 1) 00:15:06.971 15.360 - 15.455: 99.7431% ( 1) 00:15:06.971 15.550 - 15.644: 99.7533% ( 1) 00:15:06.971 15.834 - 15.929: 99.7636% ( 1) 00:15:06.971 16.024 - 16.119: 99.7739% ( 1) 00:15:06.971 17.161 - 
17.256: 99.7842% ( 1) 00:15:06.971 20.290 - 20.385: 99.7945% ( 1) 00:15:06.971 20.385 - 20.480: 99.8150% ( 2) 00:15:06.971 20.480 - 20.575: 99.8253% ( 1) 00:15:06.971 20.954 - 21.049: 99.8356% ( 1) 00:15:06.971 22.471 - 22.566: 99.8458% ( 1) 00:15:06.971 23.704 - 23.799: 99.8561% ( 1) 00:15:06.971 3980.705 - 4004.978: 99.8972% ( 4) 00:15:06.971 4004.978 - 4029.250: 100.0000% ( 10) 00:15:06.971 00:15:06.971 Complete histogram 00:15:06.971 ================== 00:15:06.971 Range in us Cumulative Count 00:15:06.971 2.880 - 2.892: 0.2569% ( 25) 00:15:06.971 2.892 - 2.904: 0.6064% ( 34) 00:15:06.971 2.904 - 2.916: 0.7503% ( 14) 00:15:06.971 2.916 - 2.927: 0.8016% ( 5) 00:15:06.971 2.927 - 2.939: 0.8736% ( 7) 00:15:06.971 2.939 - 2.951: 1.0072% ( 13) 00:15:06.971 2.951 - 2.963: 1.0791% ( 7) 00:15:06.971 2.963 - 2.975: 1.1202% ( 4) 00:15:06.971 2.975 - 2.987: 1.1408% ( 2) 00:15:06.971 2.987 - 2.999: 1.1614% ( 2) 00:15:06.971 2.999 - 3.010: 2.2199% ( 103) 00:15:06.971 3.010 - 3.022: 21.7780% ( 1903) 00:15:06.971 3.022 - 3.034: 51.1922% ( 2862) 00:15:06.971 3.034 - 3.058: 65.8068% ( 1422) 00:15:06.971 3.058 - 3.081: 86.0843% ( 1973) 00:15:06.971 3.081 - 3.105: 93.5971% ( 731) 00:15:06.971 3.105 - 3.129: 96.9168% ( 323) 00:15:06.971 3.129 - 3.153: 97.7698% ( 83) 00:15:06.971 [2024-07-24 19:07:12.251409] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:06.971 3.153 - 3.176: 97.9856% ( 21) 00:15:06.971 3.176 - 3.200: 98.0678% ( 8) 00:15:06.971 3.200 - 3.224: 98.2117% ( 14) 00:15:06.971 3.224 - 3.247: 98.3659% ( 15) 00:15:06.971 3.247 - 3.271: 98.4378% ( 7) 00:15:06.971 3.295 - 3.319: 98.4892% ( 5) 00:15:06.971 3.319 - 3.342: 98.5303% ( 4) 00:15:06.971 3.342 - 3.366: 98.5612% ( 3) 00:15:06.971 3.366 - 3.390: 98.5714% ( 1) 00:15:06.971 3.413 - 3.437: 98.5817% ( 1) 00:15:06.971 3.484 - 3.508: 98.5920% ( 1) 00:15:06.971 3.508 - 3.532: 98.6023% ( 1) 00:15:06.971 3.556 - 3.579: 98.6125% ( 1) 00:15:06.971 3.721 - 3.745: 98.6228% ( 1) 00:15:06.971 3.816 - 3.840: 98.6331% ( 1) 00:15:06.971 3.935 - 3.959: 98.6434% ( 1) 00:15:06.971 3.982 - 4.006: 98.6742% ( 3) 00:15:06.971 4.030 - 4.053: 98.6845% ( 1) 00:15:06.971 4.053 - 4.077: 98.7256% ( 4) 00:15:06.971 4.077 - 4.101: 98.7359% ( 1) 00:15:06.971 4.101 - 4.124: 98.7461% ( 1) 00:15:06.971 4.124 - 4.148: 98.7770% ( 3) 00:15:06.971 4.148 - 4.172: 98.7873% ( 1) 00:15:06.971 4.172 - 4.196: 98.8181% ( 3) 00:15:06.971 4.196 - 4.219: 98.8386% ( 2) 00:15:06.971 4.243 - 4.267: 98.8489% ( 1) 00:15:06.971 4.267 - 4.290: 98.8695% ( 2) 00:15:06.971 4.290 - 4.314: 98.8798% ( 1) 00:15:06.971 4.338 - 4.361: 98.8900% ( 1) 00:15:06.971 4.361 - 4.385: 98.9003% ( 1) 00:15:06.971 4.456 - 4.480: 98.9106% ( 1) 00:15:06.971 4.954 - 4.978: 98.9209% ( 1) 00:15:06.971 5.167 - 5.191: 98.9311% ( 1) 00:15:06.971 5.239 - 5.262: 98.9414% ( 1) 00:15:06.971 5.357 - 5.381: 98.9517% ( 1) 00:15:06.971 5.476 - 5.499: 98.9620% ( 1) 00:15:06.971 6.542 - 6.590: 98.9825% ( 2) 00:15:06.971 7.111 - 7.159: 98.9928% ( 1) 00:15:06.971 7.253 - 7.301: 99.0031% ( 1) 00:15:06.971 7.775 - 7.822: 99.0134% ( 1) 00:15:06.971 7.917 - 7.964: 99.0236% ( 1) 00:15:06.971 8.012 - 8.059: 99.0545% ( 3) 00:15:06.971 8.249 - 8.296: 99.0647% ( 1) 00:15:06.971 8.344 - 8.391: 99.0750% ( 1) 00:15:06.971 8.486 - 8.533: 99.0853% ( 1) 00:15:06.971 8.533 - 8.581: 99.0956% ( 1) 00:15:06.972 8.628 - 8.676: 99.1059% ( 1) 00:15:06.972 9.197 - 9.244: 99.1161% ( 1) 00:15:06.972 9.339 - 9.387: 99.1264% ( 1) 00:15:06.972 9.624 - 9.671: 99.1470% ( 2) 00:15:06.972 9.766 - 9.813:
99.1572% ( 1) 00:15:06.972 12.421 - 12.516: 99.1675% ( 1) 00:15:06.972 12.705 - 12.800: 99.1778% ( 1) 00:15:06.972 15.739 - 15.834: 99.1881% ( 1) 00:15:06.972 19.532 - 19.627: 99.1984% ( 1) 00:15:06.972 35.461 - 35.650: 99.2086% ( 1) 00:15:06.972 2961.256 - 2973.393: 99.2189% ( 1) 00:15:06.972 3980.705 - 4004.978: 99.8767% ( 64) 00:15:06.972 4004.978 - 4029.250: 100.0000% ( 12) 00:15:06.972 00:15:06.972 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:06.972 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:06.972 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:06.972 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:06.972 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:06.972 [ 00:15:06.972 { 00:15:06.972 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:06.972 "subtype": "Discovery", 00:15:06.972 "listen_addresses": [], 00:15:06.972 "allow_any_host": true, 00:15:06.972 "hosts": [] 00:15:06.972 }, 00:15:06.972 { 00:15:06.972 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:06.972 "subtype": "NVMe", 00:15:06.972 "listen_addresses": [ 00:15:06.972 { 00:15:06.972 "trtype": "VFIOUSER", 00:15:06.972 "adrfam": "IPv4", 00:15:06.972 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:06.972 "trsvcid": "0" 00:15:06.972 } 00:15:06.972 ], 00:15:06.972 "allow_any_host": true, 00:15:06.972 "hosts": [], 00:15:06.972 "serial_number": "SPDK1", 00:15:06.972 "model_number": "SPDK bdev Controller", 00:15:06.972 "max_namespaces": 32, 00:15:06.972 "min_cntlid": 1, 00:15:06.972 "max_cntlid": 65519, 00:15:06.972 "namespaces": [ 00:15:06.972 { 00:15:06.972 "nsid": 1, 00:15:06.972 "bdev_name": "Malloc1", 00:15:06.972 "name": "Malloc1", 00:15:06.972 "nguid": "46E893650AEA40C3AC4EF3C80FE05D7E", 00:15:06.972 "uuid": "46e89365-0aea-40c3-ac4e-f3c80fe05d7e" 00:15:06.972 }, 00:15:06.972 { 00:15:06.972 "nsid": 2, 00:15:06.972 "bdev_name": "Malloc3", 00:15:06.972 "name": "Malloc3", 00:15:06.972 "nguid": "D73A49A5D1DF4C1AA4D6C284353BC7D5", 00:15:06.972 "uuid": "d73a49a5-d1df-4c1a-a4d6-c284353bc7d5" 00:15:06.972 } 00:15:06.972 ] 00:15:06.972 }, 00:15:06.972 { 00:15:06.972 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:06.972 "subtype": "NVMe", 00:15:06.972 "listen_addresses": [ 00:15:06.972 { 00:15:06.972 "trtype": "VFIOUSER", 00:15:06.972 "adrfam": "IPv4", 00:15:06.972 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:06.972 "trsvcid": "0" 00:15:06.972 } 00:15:06.972 ], 00:15:06.972 "allow_any_host": true, 00:15:06.972 "hosts": [], 00:15:06.972 "serial_number": "SPDK2", 00:15:06.972 "model_number": "SPDK bdev Controller", 00:15:06.972 "max_namespaces": 32, 00:15:06.972 "min_cntlid": 1, 00:15:06.972 "max_cntlid": 65519, 00:15:06.972 "namespaces": [ 00:15:06.972 { 00:15:06.972 "nsid": 1, 00:15:06.972 "bdev_name": "Malloc2", 00:15:06.972 "name": "Malloc2", 00:15:06.972 "nguid": "E8ECADE2600F420B85244494E8513CED", 00:15:06.972 "uuid": "e8ecade2-600f-420b-8524-4494e8513ced" 00:15:06.972 } 00:15:06.972 ] 00:15:06.972 } 00:15:06.972 ] 00:15:07.230 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 
-- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:07.230 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1637194 00:15:07.230 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:07.230 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:07.230 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:07.230 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:07.230 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:07.230 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:07.230 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:07.230 19:07:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:07.230 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.230 [2024-07-24 19:07:12.874317] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:07.488 Malloc4 00:15:07.488 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:07.747 [2024-07-24 19:07:13.423390] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:07.747 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:08.004 Asynchronous Event Request test 00:15:08.004 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.004 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:08.004 Registering asynchronous event callbacks... 00:15:08.004 Starting namespace attribute notice tests for all controllers... 00:15:08.004 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:08.004 aer_cb - Changed Namespace 00:15:08.004 Cleaning up... 
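The "aer_cb - Changed Namespace" notice above is the point of the AER test: the aer tool arms an Asynchronous Event Request and touches a marker file once its callbacks are registered; the script waits for that file, hot-adds a second namespace over RPC, and the target completes the pending AER with a namespace-attribute notice (log page 4). A minimal sketch of that sequence, condensed from the shell trace in this log (the polling loop is an illustrative stand-in for the waitforfile helper; the other commands are taken from the trace):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  traddr=/var/run/vfio-user/domain/vfio-user2/2
  # arm the AER; the tool touches the file once it is ready for events
  $SPDK/test/nvme/aer/aer -r "trtype:VFIOUSER traddr:$traddr subnqn:nqn.2019-07.io.spdk:cnode2" \
      -n 2 -g -t /tmp/aer_touch_file &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done   # stand-in for waitforfile
  rm -f /tmp/aer_touch_file
  # hot-add a second namespace; this fires the namespace-attribute AEN
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
  wait $aerpid   # aer exits after logging 'aer_cb - Changed Namespace'

The nvmf_get_subsystems dump that follows confirms the result: cnode2 now carries the new Malloc4 namespace as nsid 2.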
00:15:08.261 [ 00:15:08.261 { 00:15:08.261 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:08.261 "subtype": "Discovery", 00:15:08.261 "listen_addresses": [], 00:15:08.261 "allow_any_host": true, 00:15:08.261 "hosts": [] 00:15:08.261 }, 00:15:08.261 { 00:15:08.261 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:08.261 "subtype": "NVMe", 00:15:08.261 "listen_addresses": [ 00:15:08.261 { 00:15:08.261 "trtype": "VFIOUSER", 00:15:08.261 "adrfam": "IPv4", 00:15:08.261 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:08.261 "trsvcid": "0" 00:15:08.261 } 00:15:08.261 ], 00:15:08.261 "allow_any_host": true, 00:15:08.261 "hosts": [], 00:15:08.261 "serial_number": "SPDK1", 00:15:08.261 "model_number": "SPDK bdev Controller", 00:15:08.261 "max_namespaces": 32, 00:15:08.261 "min_cntlid": 1, 00:15:08.261 "max_cntlid": 65519, 00:15:08.261 "namespaces": [ 00:15:08.261 { 00:15:08.261 "nsid": 1, 00:15:08.261 "bdev_name": "Malloc1", 00:15:08.261 "name": "Malloc1", 00:15:08.261 "nguid": "46E893650AEA40C3AC4EF3C80FE05D7E", 00:15:08.261 "uuid": "46e89365-0aea-40c3-ac4e-f3c80fe05d7e" 00:15:08.261 }, 00:15:08.261 { 00:15:08.261 "nsid": 2, 00:15:08.261 "bdev_name": "Malloc3", 00:15:08.261 "name": "Malloc3", 00:15:08.261 "nguid": "D73A49A5D1DF4C1AA4D6C284353BC7D5", 00:15:08.262 "uuid": "d73a49a5-d1df-4c1a-a4d6-c284353bc7d5" 00:15:08.262 } 00:15:08.262 ] 00:15:08.262 }, 00:15:08.262 { 00:15:08.262 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:08.262 "subtype": "NVMe", 00:15:08.262 "listen_addresses": [ 00:15:08.262 { 00:15:08.262 "trtype": "VFIOUSER", 00:15:08.262 "adrfam": "IPv4", 00:15:08.262 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:08.262 "trsvcid": "0" 00:15:08.262 } 00:15:08.262 ], 00:15:08.262 "allow_any_host": true, 00:15:08.262 "hosts": [], 00:15:08.262 "serial_number": "SPDK2", 00:15:08.262 "model_number": "SPDK bdev Controller", 00:15:08.262 "max_namespaces": 32, 00:15:08.262 "min_cntlid": 1, 00:15:08.262 "max_cntlid": 65519, 00:15:08.262 "namespaces": [ 00:15:08.262 { 00:15:08.262 "nsid": 1, 00:15:08.262 "bdev_name": "Malloc2", 00:15:08.262 "name": "Malloc2", 00:15:08.262 "nguid": "E8ECADE2600F420B85244494E8513CED", 00:15:08.262 "uuid": "e8ecade2-600f-420b-8524-4494e8513ced" 00:15:08.262 }, 00:15:08.262 { 00:15:08.262 "nsid": 2, 00:15:08.262 "bdev_name": "Malloc4", 00:15:08.262 "name": "Malloc4", 00:15:08.262 "nguid": "92B2145814CC4FF4BCC8C5124BBB517E", 00:15:08.262 "uuid": "92b21458-14cc-4ff4-bcc8-c5124bbb517e" 00:15:08.262 } 00:15:08.262 ] 00:15:08.262 } 00:15:08.262 ] 00:15:08.262 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1637194 00:15:08.262 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:08.262 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1631335 00:15:08.262 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1631335 ']' 00:15:08.262 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1631335 00:15:08.262 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:08.262 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:08.262 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1631335 00:15:08.262 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:08.262 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:08.262 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1631335' 00:15:08.262 killing process with pid 1631335 00:15:08.262 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1631335 00:15:08.262 19:07:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1631335 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1637458 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1637458' 00:15:08.829 Process pid: 1637458 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1637458 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1637458 ']' 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:08.829 19:07:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:08.829 [2024-07-24 19:07:14.419174] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:08.829 [2024-07-24 19:07:14.420447] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:15:08.829 [2024-07-24 19:07:14.420517] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.829 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.829 [2024-07-24 19:07:14.521944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.088 [2024-07-24 19:07:14.720349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.088 [2024-07-24 19:07:14.720475] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.088 [2024-07-24 19:07:14.720514] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:09.088 [2024-07-24 19:07:14.720544] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:09.088 [2024-07-24 19:07:14.720571] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.088 [2024-07-24 19:07:14.720741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.088 [2024-07-24 19:07:14.720805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.088 [2024-07-24 19:07:14.720867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.088 [2024-07-24 19:07:14.720871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.355 [2024-07-24 19:07:14.886520] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:09.355 [2024-07-24 19:07:14.886786] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:09.355 [2024-07-24 19:07:14.887127] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:09.355 [2024-07-24 19:07:14.887794] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:09.355 [2024-07-24 19:07:14.888117] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
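At this point the target is back up with its reactors in interrupt mode, and the trace that follows rebuilds the two vfio-user devices over RPC. Condensed into one sketch (commands taken from the trace; the loop mirrors the script's seq over NUM_DEVICES, and the -M -I transport flags are reproduced verbatim from the test invocation):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  for i in 1 2; do
      traddr=/var/run/vfio-user/domain/vfio-user$i/$i
      mkdir -p $traddr
      $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i        # 64 MiB bdev, 512-byte blocks
      $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      # the listener address is the vfio-user socket directory, service id 0
      $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a $traddr -s 0
  done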
00:15:09.933 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.933 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:09.933 19:07:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:10.864 19:07:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:11.430 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:11.430 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:11.430 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:11.430 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:11.430 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:11.996 Malloc1 00:15:11.996 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:12.254 19:07:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:12.511 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:13.076 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:13.076 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:13.076 19:07:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:13.334 Malloc2 00:15:13.334 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:14.266 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:14.524 19:07:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:14.781 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:14.781 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1637458 00:15:14.781 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 1637458 ']' 00:15:14.781 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1637458 00:15:14.781 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:14.781 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:14.781 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1637458 00:15:14.781 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:14.781 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:14.781 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1637458' 00:15:14.781 killing process with pid 1637458 00:15:14.781 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1637458 00:15:14.781 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1637458 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:15.349 00:15:15.349 real 0m59.430s 00:15:15.349 user 3m53.792s 00:15:15.349 sys 0m6.400s 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:15.349 ************************************ 00:15:15.349 END TEST nvmf_vfio_user 00:15:15.349 ************************************ 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:15.349 ************************************ 00:15:15.349 START TEST nvmf_vfio_user_nvme_compliance 00:15:15.349 ************************************ 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:15.349 * Looking for test storage... 
00:15:15.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:15.349 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1638236 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1638236' 00:15:15.350 Process pid: 1638236 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1638236 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1638236 ']' 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.350 19:07:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:15.609 [2024-07-24 19:07:21.050392] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:15:15.609 [2024-07-24 19:07:21.050528] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.609 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.609 [2024-07-24 19:07:21.136335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:15.609 [2024-07-24 19:07:21.283883] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.609 [2024-07-24 19:07:21.283965] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.609 [2024-07-24 19:07:21.283986] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.609 [2024-07-24 19:07:21.284003] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.609 [2024-07-24 19:07:21.284018] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.609 [2024-07-24 19:07:21.284111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.609 [2024-07-24 19:07:21.284184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.609 [2024-07-24 19:07:21.284187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.867 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.867 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:15.867 19:07:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.801 malloc0 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.801 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:17.060 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.060 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:17.060 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.060 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:17.060 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.060 19:07:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:17.060 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.060 00:15:17.060 00:15:17.060 CUnit - A unit testing framework for C - Version 2.1-3 00:15:17.060 http://cunit.sourceforge.net/ 00:15:17.060 00:15:17.060 00:15:17.060 Suite: nvme_compliance 00:15:17.318 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 19:07:22.768204] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.318 [2024-07-24 19:07:22.769859] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:17.318 [2024-07-24 19:07:22.769902] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:17.318 [2024-07-24 19:07:22.769921] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:17.318 [2024-07-24 19:07:22.771220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.318 passed 00:15:17.318 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 19:07:22.874076] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.318 [2024-07-24 19:07:22.880110] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.318 passed 00:15:17.318 Test: admin_identify_ns ...[2024-07-24 19:07:22.985223] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.577 [2024-07-24 19:07:23.043605] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:17.577 [2024-07-24 19:07:23.052454] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:17.577 [2024-07-24 
19:07:23.073628] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.577 passed 00:15:17.577 Test: admin_get_features_mandatory_features ...[2024-07-24 19:07:23.175463] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.577 [2024-07-24 19:07:23.178486] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.577 passed 00:15:17.835 Test: admin_get_features_optional_features ...[2024-07-24 19:07:23.282275] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.835 [2024-07-24 19:07:23.285306] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.835 passed 00:15:17.835 Test: admin_set_features_number_of_queues ...[2024-07-24 19:07:23.387529] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.835 [2024-07-24 19:07:23.494571] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.093 passed 00:15:18.093 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 19:07:23.595531] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.093 [2024-07-24 19:07:23.598565] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.093 passed 00:15:18.093 Test: admin_get_log_page_with_lpo ...[2024-07-24 19:07:23.701216] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.093 [2024-07-24 19:07:23.774449] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:18.093 [2024-07-24 19:07:23.787590] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.350 passed 00:15:18.350 Test: fabric_property_get ...[2024-07-24 19:07:23.891379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.350 [2024-07-24 19:07:23.892793] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:18.350 [2024-07-24 19:07:23.894410] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.350 passed 00:15:18.350 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 19:07:23.998188] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.350 [2024-07-24 19:07:23.999605] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:18.350 [2024-07-24 19:07:24.001212] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.350 passed 00:15:18.608 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 19:07:24.104293] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.608 [2024-07-24 19:07:24.189441] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:18.608 [2024-07-24 19:07:24.205444] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:18.608 [2024-07-24 19:07:24.210573] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.608 passed 00:15:18.866 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 19:07:24.312504] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.866 [2024-07-24 19:07:24.313962] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
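The compliance suite above exercises a vfio-user target that compliance.sh assembled with nothing but RPCs (the rpc_cmd calls earlier in this run). A minimal sketch of the same setup with SPDK's stock scripts/rpc.py client, assuming a running nvmf_tgt and a repo-relative path to rpc.py:

    # vfio-user transport plus a 64 MiB, 512-byte-block malloc namespace
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    # -a: allow any host, -s: serial number, -m 32: max namespaces
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    # the VFIOUSER listener address is the socket directory created above
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

Each test then walks the resulting controller through the cycle visible in the notices: enable the controller, issue one deliberately malformed command, assert the exact error the target logs, and disable again.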
00:15:18.866 [2024-07-24 19:07:24.315551] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.866 passed 00:15:18.866 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 19:07:24.418288] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.866 [2024-07-24 19:07:24.495449] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:18.866 [2024-07-24 19:07:24.519447] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:18.866 [2024-07-24 19:07:24.524595] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.124 passed 00:15:19.124 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 19:07:24.625597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.124 [2024-07-24 19:07:24.627007] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:19.124 [2024-07-24 19:07:24.627059] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:19.124 [2024-07-24 19:07:24.628626] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.124 passed 00:15:19.124 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 19:07:24.729744] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.381 [2024-07-24 19:07:24.821464] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:19.381 [2024-07-24 19:07:24.829450] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:19.382 [2024-07-24 19:07:24.837445] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:19.382 [2024-07-24 19:07:24.845446] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:19.382 [2024-07-24 19:07:24.874587] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.382 passed 00:15:19.382 Test: admin_create_io_sq_verify_pc ...[2024-07-24 19:07:24.978554] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.382 [2024-07-24 19:07:24.996464] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:19.382 [2024-07-24 19:07:25.014393] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.382 passed 00:15:19.639 Test: admin_create_io_qp_max_qps ...[2024-07-24 19:07:25.115176] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:20.573 [2024-07-24 19:07:26.203452] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:21.140 [2024-07-24 19:07:26.581474] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.140 passed 00:15:21.140 Test: admin_create_io_sq_shared_cq ...[2024-07-24 19:07:26.683223] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.140 [2024-07-24 19:07:26.814444] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:21.398 [2024-07-24 19:07:26.851563] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.398 passed 00:15:21.398 00:15:21.398 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.398 
suites 1 1 n/a 0 0 00:15:21.398 tests 18 18 18 0 0 00:15:21.398 asserts 360 360 360 0 n/a 00:15:21.398 00:15:21.398 Elapsed time = 1.733 seconds 00:15:21.398 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1638236 00:15:21.398 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1638236 ']' 00:15:21.398 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1638236 00:15:21.398 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:21.398 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:21.398 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1638236 00:15:21.398 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:21.398 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:21.398 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1638236' 00:15:21.398 killing process with pid 1638236 00:15:21.398 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1638236 00:15:21.398 19:07:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1638236 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:21.967 00:15:21.967 real 0m6.476s 00:15:21.967 user 0m17.864s 00:15:21.967 sys 0m0.683s 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:21.967 ************************************ 00:15:21.967 END TEST nvmf_vfio_user_nvme_compliance 00:15:21.967 ************************************ 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:21.967 ************************************ 00:15:21.967 START TEST nvmf_vfio_user_fuzz 00:15:21.967 ************************************ 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:21.967 * Looking for test storage... 
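The killprocess helper above (from autotest_common.sh) tears the target down defensively rather than with a bare kill: it probes that the pid is still alive, refuses to signal it if the process name resolves to a sudo wrapper, and finally reaps the child so its exit status is collected. A condensed sketch of that logic, assuming the target is a child of the current shell (otherwise wait cannot reap it):

    pid=1638236                            # pid the target was started with
    kill -0 "$pid"                         # liveness probe; non-zero if gone
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != sudo ] && kill "$pid"     # never signal a bare sudo wrapper
    wait "$pid"                            # reap the child process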
00:15:21.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.967 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1639045 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1639045' 00:15:21.968 Process pid: 1639045 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1639045 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1639045 ']' 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
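Rather than sleeping for a fixed interval, waitforlisten blocks until the freshly forked nvmf_tgt actually answers on its RPC socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100 in the trace above). A minimal re-creation of the launch-and-wait pattern, using rpc.py's rpc_get_methods call as the readiness probe; the real helper polls the same socket but its internals differ:

    # -i 0: shm id, -e 0xFFFF: tracepoint group mask, -m 0x1: run on core 0 only
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    for i in $(seq 100); do                # max_retries=100, as in the log
        scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done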
00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:21.968 19:07:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:23.346 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:23.346 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:23.346 19:07:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.282 malloc0 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
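With the subsystem now listening on /var/run/vfio-user, the script hands nvme_fuzz the transport ID it just stored in trid and lets it hammer the controller. In the invocation that follows, -m 0x2 is the fuzzer's core mask, and -t 30 (run time in seconds) and -S 123456 (random seed) are corroborated by the roughly 30-second jump in the timestamps and the random_seed values echoed in the summary; the -N and -a flags are not explained anywhere in this log, so treat any reading of them as an assumption:

    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a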
00:15:24.282 19:07:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:56.411 Fuzzing completed. Shutting down the fuzz application 00:15:56.411 00:15:56.411 Dumping successful admin opcodes: 00:15:56.411 8, 9, 10, 24, 00:15:56.411 Dumping successful io opcodes: 00:15:56.411 0, 00:15:56.411 NS: 0x200003a1ef00 I/O qp, Total commands completed: 457755, total successful commands: 1777, random_seed: 1817365248 00:15:56.411 NS: 0x200003a1ef00 admin qp, Total commands completed: 88174, total successful commands: 706, random_seed: 1896817920 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1639045 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1639045 ']' 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1639045 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1639045 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1639045' 00:15:56.411 killing process with pid 1639045 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1639045 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1639045 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:56.411 00:15:56.411 real 0m33.543s 00:15:56.411 user 0m33.358s 00:15:56.411 sys 0m27.628s 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.411 19:08:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.411 
************************************ 00:15:56.411 END TEST nvmf_vfio_user_fuzz 00:15:56.411 ************************************ 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:56.411 ************************************ 00:15:56.411 START TEST nvmf_auth_target 00:15:56.411 ************************************ 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:56.411 * Looking for test storage... 00:15:56.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:56.411 19:08:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.411 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
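The kilobyte-long PATH values above are not corruption: paths/export.sh prepends the same three toolchain directories unconditionally, and it is evidently sourced once per test script within the job, so each pass stacks another copy on the front. A guard of the following shape would keep the prepend idempotent (a sketch, not the script's actual code):

    for d in /opt/protoc/21.7/bin /opt/go/1.21.1/bin /opt/golangci/1.54.2/bin; do
        case ":$PATH:" in
            *":$d:"*) ;;                   # already on PATH, skip
            *) PATH="$d:$PATH" ;;
        esac
    done
    export PATH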
00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:56.412 19:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:58.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:58.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:58.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:15:58.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:58.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:58.321 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:58.322 19:08:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:58.322 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:58.322 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:58.322 Found net devices under 0000:84:00.0: cvl_0_0 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:58.322 19:08:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:58.322 Found net devices under 0000:84:00.1: cvl_0_1 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:58.322 19:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:58.322 19:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:58.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:15:58.582 00:15:58.582 --- 10.0.0.2 ping statistics --- 00:15:58.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.582 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:58.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:58.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:15:58.582 00:15:58.582 --- 10.0.0.1 ping statistics --- 00:15:58.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.582 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1644627 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1644627 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1644627 ']' 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.582 19:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.582 19:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1644787 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0fcc8acaeeccdb664fb61d9931f5eda3e2a76ca2b30b2756 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.M0I 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0fcc8acaeeccdb664fb61d9931f5eda3e2a76ca2b30b2756 0 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0fcc8acaeeccdb664fb61d9931f5eda3e2a76ca2b30b2756 0 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0fcc8acaeeccdb664fb61d9931f5eda3e2a76ca2b30b2756 00:15:59.960 19:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.M0I 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.M0I 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.M0I 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5b1f51b8efb090140fd33700971e30f4033735caec8bce67e9847e6a70e9dc49 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.XzI 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5b1f51b8efb090140fd33700971e30f4033735caec8bce67e9847e6a70e9dc49 3 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5b1f51b8efb090140fd33700971e30f4033735caec8bce67e9847e6a70e9dc49 3 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5b1f51b8efb090140fd33700971e30f4033735caec8bce67e9847e6a70e9dc49 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.XzI 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.XzI 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.XzI 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:59.960 19:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=17eb597fb268769785b75f72fc4ce7b6 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.W1Y 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 17eb597fb268769785b75f72fc4ce7b6 1 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 17eb597fb268769785b75f72fc4ce7b6 1 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=17eb597fb268769785b75f72fc4ce7b6 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.W1Y 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.W1Y 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.W1Y 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=98415017ae1dcfa76cc5a25a868a6bd25566dc3937ba86d9 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.FgD 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 98415017ae1dcfa76cc5a25a868a6bd25566dc3937ba86d9 2 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
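[editor's aside] Every gen_dhchap_key call above follows the same recipe: xxd reads len/2 bytes from /dev/urandom and prints them as a len-character hex string, then an inline python step wraps that ASCII string into a DHHC-1 secret file. Comparing the hex keys traced here with the DHHC-1:NN:...: secrets used later in this log suggests the base64 payload is the ASCII key with a little-endian CRC-32 trailer appended. The sketch below is a reconstruction under that reading, not the verbatim nvmf/common.sh helpers:

# Reconstructed sketch of gen_dhchap_key/format_dhchap_key; the digest ids
# map null=0, sha256=1, sha384=2, sha512=3 as in the digests array above.
format_dhchap_key() { # $1 = hex key string, $2 = digest id
  python3 - "$1" "$2" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the ASCII hex string is the secret
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity trailer (assumed)
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}

gen_dhchap_key() { # $1 = digest name, $2 = key length in hex characters
  local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
  local key file
  key=$(xxd -p -c0 -l $(($2 / 2)) /dev/urandom)   # len/2 random bytes -> len hex chars
  file=$(mktemp -t "spdk.key-$1.XXX")
  format_dhchap_key "$key" "${digests[$1]}" > "$file"
  chmod 0600 "$file" && echo "$file"              # caller stores the path in keys[]/ckeys[]
}

For example, gen_dhchap_key null 48 produced /tmp/spdk.key-null.M0I above, and its base64 payload in the later --dhchap-secret DHHC-1:00:... string decodes back to the hex key 0fcc8ac... generated here.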
98415017ae1dcfa76cc5a25a868a6bd25566dc3937ba86d9 2 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=98415017ae1dcfa76cc5a25a868a6bd25566dc3937ba86d9 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.FgD 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.FgD 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.FgD 00:15:59.960 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=40ca83438614403f4dc99950c4706fdc1f7b1d17e3dfd12b 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Y1w 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 40ca83438614403f4dc99950c4706fdc1f7b1d17e3dfd12b 2 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 40ca83438614403f4dc99950c4706fdc1f7b1d17e3dfd12b 2 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=40ca83438614403f4dc99950c4706fdc1f7b1d17e3dfd12b 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Y1w 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Y1w 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Y1w 00:15:59.961 19:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=72b0d0aca20f11bed6d7f883a4cc7fb5 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.nWM 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 72b0d0aca20f11bed6d7f883a4cc7fb5 1 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 72b0d0aca20f11bed6d7f883a4cc7fb5 1 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=72b0d0aca20f11bed6d7f883a4cc7fb5 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:59.961 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.nWM 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.nWM 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.nWM 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2c8aa528ee4e074dd7d6a690411095b06e52d159065c2bf100a4a84a2996fb62 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:00.220 
19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.JwN 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2c8aa528ee4e074dd7d6a690411095b06e52d159065c2bf100a4a84a2996fb62 3 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2c8aa528ee4e074dd7d6a690411095b06e52d159065c2bf100a4a84a2996fb62 3 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2c8aa528ee4e074dd7d6a690411095b06e52d159065c2bf100a4a84a2996fb62 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.JwN 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.JwN 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.JwN 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1644627 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1644627 ']' 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:00.220 19:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.478 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.478 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:00.478 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1644787 /var/tmp/host.sock 00:16:00.478 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1644787 ']' 00:16:00.478 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:00.478 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:00.478 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
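[editor's aside] Two SPDK processes are in play from here on: nvmf_tgt (pid 1644627) inside the namespace as the authenticating target, answering RPCs on /var/tmp/spdk.sock, and a second spdk_tgt (pid 1644787, core mask 0x2) acting as the host with -L nvme_auth, answering on /var/tmp/host.sock. waitforlisten blocks until the given process serves its RPC socket; a minimal sketch of its likely shape (assumed, not the verbatim common/autotest_common.sh):

# Poll the RPC socket until the app answers or the retry budget runs out.
waitforlisten() { # $1 = pid, $2 = rpc socket (default /var/tmp/spdk.sock)
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2> /dev/null || return 1    # process died before listening
    # spdk_get_version is a cheap no-op RPC; success means the socket is live
    scripts/rpc.py -s "$rpc_addr" -t 1 spdk_get_version &> /dev/null && return 0
    sleep 0.1
  done
  return 1
}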
00:16:00.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:00.478 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:00.478 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.045 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:01.045 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:01.045 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:01.045 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.045 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.045 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.045 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:01.045 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.M0I 00:16:01.045 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.045 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.045 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.045 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.M0I 00:16:01.045 19:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.M0I 00:16:01.613 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.XzI ]] 00:16:01.613 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XzI 00:16:01.613 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.613 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.613 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.613 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XzI 00:16:01.613 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XzI 00:16:01.872 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:01.872 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.W1Y 00:16:01.872 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.872 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.872 19:08:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.873 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.W1Y 00:16:01.873 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.W1Y 00:16:02.131 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.FgD ]] 00:16:02.131 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FgD 00:16:02.131 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.131 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.131 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.131 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FgD 00:16:02.131 19:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FgD 00:16:02.696 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:02.696 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Y1w 00:16:02.696 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.696 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.696 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.696 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Y1w 00:16:02.696 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Y1w 00:16:02.954 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.nWM ]] 00:16:02.954 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nWM 00:16:02.954 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.954 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.954 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.954 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nWM 00:16:02.954 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nWM 00:16:03.519 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
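[editor's aside] The registration pattern traced at target/auth.sh@81-86 repeats for each key index: every key file is handed both to the target over /var/tmp/spdk.sock (rpc_cmd) and to the host over /var/tmp/host.sock (hostrpc), with controller keys registered under ckeyN names. Collapsed into the loop those markers imply (helper names are from the trace; this is a sketch of the flow, not the script verbatim):

# keys[i]/ckeys[i] hold the /tmp/spdk.key-* files generated earlier.
for i in "${!keys[@]}"; do
  rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"       # target side
  hostrpc keyring_file_add_key "key$i" "${keys[i]}"       # host side
  if [[ -n ${ckeys[i]} ]]; then                           # ckeys[3] is empty, so key3 skips this
    rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    hostrpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
  fi
done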
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:03.519 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.JwN 00:16:03.519 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.519 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.519 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.519 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.JwN 00:16:03.519 19:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.JwN 00:16:04.083 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:04.083 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:04.083 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.083 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.083 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:04.083 19:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:04.648 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:04.648 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.648 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:04.648 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:04.648 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:04.648 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.648 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.648 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.648 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.648 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.648 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.648 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.906 00:16:04.906 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.906 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.906 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.164 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.164 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.164 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.164 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.164 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.164 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.164 { 00:16:05.164 "cntlid": 1, 00:16:05.164 "qid": 0, 00:16:05.164 "state": "enabled", 00:16:05.164 "thread": "nvmf_tgt_poll_group_000", 00:16:05.164 "listen_address": { 00:16:05.164 "trtype": "TCP", 00:16:05.164 "adrfam": "IPv4", 00:16:05.164 "traddr": "10.0.0.2", 00:16:05.164 "trsvcid": "4420" 00:16:05.164 }, 00:16:05.164 "peer_address": { 00:16:05.164 "trtype": "TCP", 00:16:05.164 "adrfam": "IPv4", 00:16:05.164 "traddr": "10.0.0.1", 00:16:05.164 "trsvcid": "55378" 00:16:05.164 }, 00:16:05.164 "auth": { 00:16:05.164 "state": "completed", 00:16:05.164 "digest": "sha256", 00:16:05.164 "dhgroup": "null" 00:16:05.164 } 00:16:05.164 } 00:16:05.164 ]' 00:16:05.164 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.164 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.164 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.421 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:05.421 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.421 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.421 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.421 19:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.679 19:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
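[editor's aside] That completes one full connect_authenticate pass for sha256/null/key0: the host is pinned to a single digest and DH group via bdev_nvme_set_options, the subsystem grants the host NQN access tied to key0/ckey0, bdev_nvme_attach_controller performs the authenticated connect, and the target's qpair listing is checked for auth.state=completed with the expected digest and dhgroup before detaching. Condensed into the RPC sequence seen above (addresses and NQNs from this run; error handling omitted):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Verify the negotiated parameters on the target's qpair listing.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<< "$qpairs"   # expect: sha256
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expect: null
jq -r '.[0].auth.state'   <<< "$qpairs"   # expect: completed
hostrpc bdev_nvme_detach_controller nvme0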
DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:16:07.052 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.052 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:07.052 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.052 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.052 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.052 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.052 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.052 19:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.640 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:07.640 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.640 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.640 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:07.640 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:07.640 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.640 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.640 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.640 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.640 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.640 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.640 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
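[editor's aside] After the in-tree attach is torn down, the same credentials are exercised through the kernel initiator: nvme connect is given the raw DHHC-1 strings as --dhchap-secret (host key) and --dhchap-ctrl-secret (controller key), whose base64 payloads decode back to the hex keys generated at the start of this test. The round-trip, roughly (secrets abbreviated here; the full strings appear in the trace):

HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid "$HOSTID" \
  --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# Drop the host entry so the next key can be wired up for the same subsystem.
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
  "nqn.2014-08.org.nvmexpress:uuid:$HOSTID"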
ckey1 00:16:07.906 00:16:07.906 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.906 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.906 19:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.472 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.472 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.472 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.472 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.472 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.472 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.472 { 00:16:08.472 "cntlid": 3, 00:16:08.472 "qid": 0, 00:16:08.472 "state": "enabled", 00:16:08.472 "thread": "nvmf_tgt_poll_group_000", 00:16:08.472 "listen_address": { 00:16:08.472 "trtype": "TCP", 00:16:08.472 "adrfam": "IPv4", 00:16:08.472 "traddr": "10.0.0.2", 00:16:08.472 "trsvcid": "4420" 00:16:08.472 }, 00:16:08.472 "peer_address": { 00:16:08.472 "trtype": "TCP", 00:16:08.472 "adrfam": "IPv4", 00:16:08.472 "traddr": "10.0.0.1", 00:16:08.472 "trsvcid": "55398" 00:16:08.472 }, 00:16:08.472 "auth": { 00:16:08.472 "state": "completed", 00:16:08.472 "digest": "sha256", 00:16:08.472 "dhgroup": "null" 00:16:08.472 } 00:16:08.472 } 00:16:08.472 ]' 00:16:08.472 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.472 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.472 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.731 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:08.731 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.731 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.731 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.731 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.989 19:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:16:10.363 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.363 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:16:10.363 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:10.363 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.363 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.363 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.363 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.363 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:10.363 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:10.929 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:10.929 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.929 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:10.929 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:10.929 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:10.929 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.929 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.929 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.929 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.929 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.930 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.930 19:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.496 00:16:11.496 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.496 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.496 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.754 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.754 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.754 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.754 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.754 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.754 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.754 { 00:16:11.754 "cntlid": 5, 00:16:11.754 "qid": 0, 00:16:11.754 "state": "enabled", 00:16:11.754 "thread": "nvmf_tgt_poll_group_000", 00:16:11.754 "listen_address": { 00:16:11.754 "trtype": "TCP", 00:16:11.754 "adrfam": "IPv4", 00:16:11.754 "traddr": "10.0.0.2", 00:16:11.754 "trsvcid": "4420" 00:16:11.754 }, 00:16:11.754 "peer_address": { 00:16:11.754 "trtype": "TCP", 00:16:11.754 "adrfam": "IPv4", 00:16:11.755 "traddr": "10.0.0.1", 00:16:11.755 "trsvcid": "55424" 00:16:11.755 }, 00:16:11.755 "auth": { 00:16:11.755 "state": "completed", 00:16:11.755 "digest": "sha256", 00:16:11.755 "dhgroup": "null" 00:16:11.755 } 00:16:11.755 } 00:16:11.755 ]' 00:16:12.012 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.012 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.012 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.012 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:12.012 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.012 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.012 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.012 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.270 19:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:16:13.644 19:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.644 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:13.644 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:13.644 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.644 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.644 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.644 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:13.644 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:14.211 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:14.211 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.211 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:14.211 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:14.211 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:14.211 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.211 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:14.211 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.211 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.211 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.211 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.211 19:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.469 00:16:14.469 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.469 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.469 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.036 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.036 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.036 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
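[editor's aside] Note that the key3 pass above carries only --dhchap-key key3: ckeys[3] was generated empty, and the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at target/auth.sh@37 makes the controller-key arguments vanish when the entry is unset or empty. A standalone illustration of that expansion idiom (values hypothetical):

# ${var:+word} expands to word only when var is set and non-empty.
ckeys=(/tmp/spdk.key-sha512.XzI '')             # index 1 deliberately empty
for i in 0 1; do
  ckey=(${ckeys[i]:+--dhchap-ctrlr-key "ckey$i"})
  echo "key$i: ${ckey[@]:-<no controller-key args>}"
done
# key0: --dhchap-ctrlr-key ckey0
# key1: <no controller-key args>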
common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.036 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.036 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.036 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.036 { 00:16:15.036 "cntlid": 7, 00:16:15.036 "qid": 0, 00:16:15.036 "state": "enabled", 00:16:15.036 "thread": "nvmf_tgt_poll_group_000", 00:16:15.036 "listen_address": { 00:16:15.036 "trtype": "TCP", 00:16:15.036 "adrfam": "IPv4", 00:16:15.036 "traddr": "10.0.0.2", 00:16:15.036 "trsvcid": "4420" 00:16:15.036 }, 00:16:15.036 "peer_address": { 00:16:15.036 "trtype": "TCP", 00:16:15.036 "adrfam": "IPv4", 00:16:15.036 "traddr": "10.0.0.1", 00:16:15.036 "trsvcid": "44346" 00:16:15.036 }, 00:16:15.036 "auth": { 00:16:15.036 "state": "completed", 00:16:15.036 "digest": "sha256", 00:16:15.036 "dhgroup": "null" 00:16:15.036 } 00:16:15.036 } 00:16:15.036 ]' 00:16:15.036 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.036 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.036 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.294 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:15.294 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.294 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.294 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.294 19:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.860 19:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:16:17.231 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.231 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:17.231 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.231 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.231 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.231 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.231 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.231 19:08:22 
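[editor's aside] With the null DH group exhausted for keys 0-3, the trace moves on to ffdhe2048. The @91-@96 markers show this whole section is one triple loop: for every digest, every DH group, and every key index, the host's allowed algorithms are reset and connect_authenticate is re-run. In outline (the exact array contents are an assumption; only sha256 with null and ffdhe2048 is visible in this excerpt):

digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Pin the host to exactly one digest/dhgroup so only the combination
      # under test can be negotiated during the handshake.
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done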
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:17.231 19:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:17.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:17.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:17.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:17.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:17.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.796 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.054 00:16:18.054 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.054 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.054 19:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.619 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.619 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.619 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.619 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.619 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.619 19:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.619 { 00:16:18.619 "cntlid": 9, 00:16:18.619 "qid": 0, 00:16:18.619 "state": "enabled", 00:16:18.619 "thread": "nvmf_tgt_poll_group_000", 00:16:18.619 "listen_address": { 00:16:18.619 "trtype": "TCP", 00:16:18.619 "adrfam": "IPv4", 00:16:18.619 "traddr": "10.0.0.2", 00:16:18.619 "trsvcid": "4420" 00:16:18.619 }, 00:16:18.619 "peer_address": { 00:16:18.619 "trtype": "TCP", 00:16:18.619 "adrfam": "IPv4", 00:16:18.620 "traddr": "10.0.0.1", 00:16:18.620 "trsvcid": "44374" 00:16:18.620 }, 00:16:18.620 "auth": { 00:16:18.620 "state": "completed", 00:16:18.620 "digest": "sha256", 00:16:18.620 "dhgroup": "ffdhe2048" 00:16:18.620 } 00:16:18.620 } 00:16:18.620 ]' 00:16:18.620 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.620 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.620 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.620 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.620 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.620 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.620 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.620 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.877 19:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:16:20.249 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.249 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:20.249 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.249 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.249 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.249 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.249 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.249 19:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:20.815 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:20.815 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.815 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:20.815 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:20.815 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:20.815 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.815 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.815 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.815 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.815 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.815 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.815 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.380 00:16:21.380 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.380 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.380 19:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.946 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.946 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.946 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.946 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.946 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.946 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.946 { 00:16:21.946 "cntlid": 11, 00:16:21.946 "qid": 0, 00:16:21.946 "state": "enabled", 00:16:21.946 "thread": "nvmf_tgt_poll_group_000", 00:16:21.946 "listen_address": { 
00:16:21.946 "trtype": "TCP", 00:16:21.946 "adrfam": "IPv4", 00:16:21.946 "traddr": "10.0.0.2", 00:16:21.946 "trsvcid": "4420" 00:16:21.946 }, 00:16:21.946 "peer_address": { 00:16:21.946 "trtype": "TCP", 00:16:21.946 "adrfam": "IPv4", 00:16:21.946 "traddr": "10.0.0.1", 00:16:21.946 "trsvcid": "44388" 00:16:21.946 }, 00:16:21.946 "auth": { 00:16:21.946 "state": "completed", 00:16:21.946 "digest": "sha256", 00:16:21.946 "dhgroup": "ffdhe2048" 00:16:21.946 } 00:16:21.946 } 00:16:21.946 ]' 00:16:21.946 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.946 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.946 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.946 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:21.946 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.946 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.946 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.946 19:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.512 19:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:16:23.918 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.918 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:23.919 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.919 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.919 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.919 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.919 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:23.919 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:24.185 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:24.185 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.185 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:24.185 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:24.185 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:24.185 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.185 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.185 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.185 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.185 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.185 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.185 19:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.753 00:16:25.011 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.011 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.011 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.269 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.269 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.269 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.269 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.269 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.269 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.269 { 00:16:25.269 "cntlid": 13, 00:16:25.269 "qid": 0, 00:16:25.269 "state": "enabled", 00:16:25.269 "thread": "nvmf_tgt_poll_group_000", 00:16:25.269 "listen_address": { 00:16:25.269 "trtype": "TCP", 00:16:25.269 "adrfam": "IPv4", 00:16:25.269 "traddr": "10.0.0.2", 00:16:25.269 "trsvcid": "4420" 00:16:25.269 }, 00:16:25.269 "peer_address": { 00:16:25.269 "trtype": "TCP", 00:16:25.269 "adrfam": "IPv4", 00:16:25.269 "traddr": "10.0.0.1", 00:16:25.269 "trsvcid": "51754" 00:16:25.269 }, 00:16:25.269 "auth": { 00:16:25.269 
"state": "completed", 00:16:25.269 "digest": "sha256", 00:16:25.269 "dhgroup": "ffdhe2048" 00:16:25.269 } 00:16:25.269 } 00:16:25.269 ]' 00:16:25.269 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.269 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.269 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.528 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.528 19:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.528 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.528 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.528 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.786 19:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:16:27.160 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.160 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:27.160 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.160 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.160 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.160 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.160 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.160 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.419 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:27.419 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.419 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:27.419 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:27.419 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:16:27.419 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.419 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:27.419 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.419 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.419 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.419 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.419 19:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.984 00:16:27.984 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.984 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.984 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.550 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.550 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.550 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.550 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.550 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.550 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.550 { 00:16:28.550 "cntlid": 15, 00:16:28.550 "qid": 0, 00:16:28.550 "state": "enabled", 00:16:28.550 "thread": "nvmf_tgt_poll_group_000", 00:16:28.550 "listen_address": { 00:16:28.550 "trtype": "TCP", 00:16:28.550 "adrfam": "IPv4", 00:16:28.550 "traddr": "10.0.0.2", 00:16:28.550 "trsvcid": "4420" 00:16:28.550 }, 00:16:28.550 "peer_address": { 00:16:28.550 "trtype": "TCP", 00:16:28.550 "adrfam": "IPv4", 00:16:28.550 "traddr": "10.0.0.1", 00:16:28.550 "trsvcid": "51776" 00:16:28.550 }, 00:16:28.550 "auth": { 00:16:28.550 "state": "completed", 00:16:28.550 "digest": "sha256", 00:16:28.550 "dhgroup": "ffdhe2048" 00:16:28.550 } 00:16:28.550 } 00:16:28.550 ]' 00:16:28.550 19:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.550 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.550 19:08:34 
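The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion in the frames above is why key3 is added without a controller key: ${var:+word} expands to word only when var is set and non-empty, so an empty ckeys[3] leaves the array empty and the whole option pair disappears from the rpc_cmd invocation, downgrading that pass to host-only authentication. A minimal standalone illustration of the idiom (not from the test script; names are hypothetical):

#!/usr/bin/env bash
ckeys=(ckey0 ckey1 ckey2 "")   # last entry empty: no bidirectional auth for key3

for i in "${!ckeys[@]}"; do
    # Expands to the two-word option pair only when ckeys[i] is non-empty.
    opt=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
    echo "key$i: ${opt[*]:-<no controller key>}"
done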
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.550 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:28.550 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.550 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.550 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.550 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.808 19:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:16:30.709 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.709 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:30.709 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.709 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.709 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.709 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.709 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.709 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:30.709 19:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:30.709 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:30.709 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.709 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:30.709 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:30.709 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:30.709 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.709 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.709 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.709 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.709 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.709 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.709 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.275 00:16:31.276 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.276 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.276 19:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.842 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.842 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.842 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.842 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.842 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.842 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.842 { 00:16:31.842 "cntlid": 17, 00:16:31.842 "qid": 0, 00:16:31.842 "state": "enabled", 00:16:31.842 "thread": "nvmf_tgt_poll_group_000", 00:16:31.842 "listen_address": { 00:16:31.842 "trtype": "TCP", 00:16:31.842 "adrfam": "IPv4", 00:16:31.842 "traddr": "10.0.0.2", 00:16:31.842 "trsvcid": "4420" 00:16:31.842 }, 00:16:31.842 "peer_address": { 00:16:31.842 "trtype": "TCP", 00:16:31.842 "adrfam": "IPv4", 00:16:31.842 "traddr": "10.0.0.1", 00:16:31.842 "trsvcid": "51804" 00:16:31.842 }, 00:16:31.842 "auth": { 00:16:31.842 "state": "completed", 00:16:31.842 "digest": "sha256", 00:16:31.842 "dhgroup": "ffdhe3072" 00:16:31.842 } 00:16:31.842 } 00:16:31.842 ]' 00:16:31.842 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.842 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.842 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.842 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:31.842 19:08:37 
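After each attach, the test does not just trust that a bad handshake would have failed the connect: it asks the target for the subsystem's queue pairs and asserts that authentication completed with exactly the digest and DH group configured for this pass. Condensed (a sketch; rpc_cmd stands in for the target-side rpc.py call as in the trace, and the expected dhgroup is the one for the current loop iteration):

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# All three must match what bdev_nvme_set_options allowed for this pass.
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]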
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.101 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.101 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.101 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.359 19:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:16:33.734 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.734 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:33.734 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.734 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.734 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.734 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.734 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.734 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:34.300 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:34.300 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.300 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:34.300 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:34.300 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:34.300 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.300 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.300 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.300 19:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.300 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.300 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.300 19:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.866 00:16:34.866 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.866 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.866 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.125 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.125 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.125 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.125 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.125 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.125 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.125 { 00:16:35.125 "cntlid": 19, 00:16:35.125 "qid": 0, 00:16:35.125 "state": "enabled", 00:16:35.125 "thread": "nvmf_tgt_poll_group_000", 00:16:35.125 "listen_address": { 00:16:35.125 "trtype": "TCP", 00:16:35.125 "adrfam": "IPv4", 00:16:35.125 "traddr": "10.0.0.2", 00:16:35.125 "trsvcid": "4420" 00:16:35.125 }, 00:16:35.125 "peer_address": { 00:16:35.125 "trtype": "TCP", 00:16:35.125 "adrfam": "IPv4", 00:16:35.125 "traddr": "10.0.0.1", 00:16:35.125 "trsvcid": "37508" 00:16:35.125 }, 00:16:35.125 "auth": { 00:16:35.125 "state": "completed", 00:16:35.125 "digest": "sha256", 00:16:35.125 "dhgroup": "ffdhe3072" 00:16:35.125 } 00:16:35.125 } 00:16:35.125 ]' 00:16:35.125 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.125 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.125 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.383 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:35.383 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.383 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.383 19:08:40 
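A note on the odd-looking comparisons such as [[ nvme0 == \n\v\m\e\0 ]] and [[ completed == \c\o\m\p\l\e\t\e\d ]]: inside [[ ]] the right-hand side of == is a glob pattern, so bash's xtrace prints a quoted comparand with every character backslash-escaped to keep it literal if the line were re-read. The script itself is just comparing plain strings, roughly (hostrpc again meaning rpc.py -s /var/tmp/host.sock):

name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]   # xtrace renders the quoted RHS as \n\v\m\e\0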
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.383 19:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.641 19:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:16:37.016 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.016 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:37.016 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.016 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.016 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.016 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.016 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.016 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.274 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:37.274 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.274 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:37.274 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:37.274 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:37.274 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.274 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.274 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.274 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.274 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.274 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.274 19:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.841 00:16:37.841 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.841 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.841 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.406 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.406 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.406 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.406 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.406 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.406 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.406 { 00:16:38.406 "cntlid": 21, 00:16:38.406 "qid": 0, 00:16:38.406 "state": "enabled", 00:16:38.406 "thread": "nvmf_tgt_poll_group_000", 00:16:38.406 "listen_address": { 00:16:38.406 "trtype": "TCP", 00:16:38.406 "adrfam": "IPv4", 00:16:38.406 "traddr": "10.0.0.2", 00:16:38.406 "trsvcid": "4420" 00:16:38.406 }, 00:16:38.406 "peer_address": { 00:16:38.406 "trtype": "TCP", 00:16:38.406 "adrfam": "IPv4", 00:16:38.406 "traddr": "10.0.0.1", 00:16:38.406 "trsvcid": "37542" 00:16:38.406 }, 00:16:38.406 "auth": { 00:16:38.406 "state": "completed", 00:16:38.406 "digest": "sha256", 00:16:38.406 "dhgroup": "ffdhe3072" 00:16:38.406 } 00:16:38.406 } 00:16:38.406 ]' 00:16:38.406 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.406 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.406 19:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.406 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.406 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.406 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.406 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.406 19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.972 
19:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:16:40.351 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.351 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:40.351 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.351 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.351 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.351 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.351 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.351 19:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.918 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:40.918 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.918 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.918 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:40.918 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:40.918 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.918 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:40.918 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.918 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.918 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.918 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.918 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.485 00:16:41.485 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.485 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.485 19:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.051 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.051 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.051 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.051 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.051 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.051 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.051 { 00:16:42.051 "cntlid": 23, 00:16:42.051 "qid": 0, 00:16:42.051 "state": "enabled", 00:16:42.051 "thread": "nvmf_tgt_poll_group_000", 00:16:42.051 "listen_address": { 00:16:42.051 "trtype": "TCP", 00:16:42.051 "adrfam": "IPv4", 00:16:42.051 "traddr": "10.0.0.2", 00:16:42.051 "trsvcid": "4420" 00:16:42.051 }, 00:16:42.051 "peer_address": { 00:16:42.051 "trtype": "TCP", 00:16:42.051 "adrfam": "IPv4", 00:16:42.051 "traddr": "10.0.0.1", 00:16:42.051 "trsvcid": "37560" 00:16:42.051 }, 00:16:42.051 "auth": { 00:16:42.051 "state": "completed", 00:16:42.051 "digest": "sha256", 00:16:42.051 "dhgroup": "ffdhe3072" 00:16:42.051 } 00:16:42.051 } 00:16:42.051 ]' 00:16:42.051 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.051 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.051 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.051 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.051 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.051 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.051 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.051 19:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.617 19:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:16:43.552 19:08:49 
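The kernel-initiator pass reuses the same secrets through nvme-cli in the textual form defined for DH-HMAC-CHAP keys, DHHC-1:NN:<base64>:, where the two-digit field identifies the hash used to transform the stored secret (00 meaning untransformed, 01/02/03 meaning SHA-256/384/512). The key3 connect just above carries only --dhchap-secret, matching the missing controller key noted earlier. A condensed form of the mutual-auth connect used for key0 (a sketch: secrets are shortened here with "...", the full values appear in the trace, and $hostnqn is the host NQN defined above):

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
    --dhchap-secret DHHC-1:00:MGZj...WScrvw==: \
    --dhchap-ctrl-secret DHHC-1:03:NWIx...6k6M=:   # omitted for host-only auth
nvme disconnect -n nqn.2024-03.io.spdk:cnode0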
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.810 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:43.810 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.810 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.810 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.810 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.810 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.810 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.810 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.068 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:44.068 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.068 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.068 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:44.068 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:44.068 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.068 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.068 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.068 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.068 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.068 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.068 19:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.634 00:16:44.634 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.634 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.634 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.892 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.892 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.892 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.892 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.892 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.892 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.892 { 00:16:44.892 "cntlid": 25, 00:16:44.892 "qid": 0, 00:16:44.892 "state": "enabled", 00:16:44.892 "thread": "nvmf_tgt_poll_group_000", 00:16:44.892 "listen_address": { 00:16:44.892 "trtype": "TCP", 00:16:44.892 "adrfam": "IPv4", 00:16:44.892 "traddr": "10.0.0.2", 00:16:44.892 "trsvcid": "4420" 00:16:44.892 }, 00:16:44.892 "peer_address": { 00:16:44.892 "trtype": "TCP", 00:16:44.892 "adrfam": "IPv4", 00:16:44.892 "traddr": "10.0.0.1", 00:16:44.892 "trsvcid": "39384" 00:16:44.892 }, 00:16:44.892 "auth": { 00:16:44.892 "state": "completed", 00:16:44.892 "digest": "sha256", 00:16:44.892 "dhgroup": "ffdhe4096" 00:16:44.892 } 00:16:44.892 } 00:16:44.892 ]' 00:16:44.892 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.150 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.150 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.151 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.151 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.151 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.151 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.151 19:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.715 19:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:16:47.089 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
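Between passes the state is torn down in three steps so the next digest/DH-group combination starts clean; in outline (a sketch using the trace's hostrpc/rpc_cmd wrappers and the $subnqn/$hostnqn variables from above):

hostrpc bdev_nvme_detach_controller nvme0      # drop the userspace initiator
nvme disconnect -n "$subnqn"                   # drop the kernel initiator
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"   # forget the keys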
00:16:47.089 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:47.089 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.089 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.089 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.089 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.089 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.089 19:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.347 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:47.347 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.347 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.347 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:47.347 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:47.347 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.348 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.348 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.348 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.605 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.605 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.605 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:47.863 00:16:48.120 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.121 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.121 19:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.686 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.686 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.686 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.686 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.686 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.686 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.686 { 00:16:48.686 "cntlid": 27, 00:16:48.686 "qid": 0, 00:16:48.686 "state": "enabled", 00:16:48.686 "thread": "nvmf_tgt_poll_group_000", 00:16:48.686 "listen_address": { 00:16:48.686 "trtype": "TCP", 00:16:48.686 "adrfam": "IPv4", 00:16:48.686 "traddr": "10.0.0.2", 00:16:48.686 "trsvcid": "4420" 00:16:48.686 }, 00:16:48.686 "peer_address": { 00:16:48.686 "trtype": "TCP", 00:16:48.686 "adrfam": "IPv4", 00:16:48.686 "traddr": "10.0.0.1", 00:16:48.686 "trsvcid": "39408" 00:16:48.686 }, 00:16:48.686 "auth": { 00:16:48.686 "state": "completed", 00:16:48.686 "digest": "sha256", 00:16:48.686 "dhgroup": "ffdhe4096" 00:16:48.686 } 00:16:48.686 } 00:16:48.686 ]' 00:16:48.686 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.686 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.686 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.686 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.686 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.686 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.686 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.686 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.253 19:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:16:50.625 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.625 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:50.625 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.625 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.625 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.625 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.625 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.625 19:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.884 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:50.884 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.884 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.884 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:50.884 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:50.884 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.884 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.884 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.884 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.884 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.884 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.884 19:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.451 00:16:51.451 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.451 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.451 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.017 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.017 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.017 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.017 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.017 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.017 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.017 { 00:16:52.017 "cntlid": 29, 00:16:52.017 "qid": 0, 00:16:52.017 "state": "enabled", 00:16:52.017 "thread": "nvmf_tgt_poll_group_000", 00:16:52.017 "listen_address": { 00:16:52.017 "trtype": "TCP", 00:16:52.017 "adrfam": "IPv4", 00:16:52.017 "traddr": "10.0.0.2", 00:16:52.017 "trsvcid": "4420" 00:16:52.017 }, 00:16:52.017 "peer_address": { 00:16:52.017 "trtype": "TCP", 00:16:52.017 "adrfam": "IPv4", 00:16:52.017 "traddr": "10.0.0.1", 00:16:52.017 "trsvcid": "39438" 00:16:52.017 }, 00:16:52.017 "auth": { 00:16:52.017 "state": "completed", 00:16:52.017 "digest": "sha256", 00:16:52.017 "dhgroup": "ffdhe4096" 00:16:52.017 } 00:16:52.018 } 00:16:52.018 ]' 00:16:52.018 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.276 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.276 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.276 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.276 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.276 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.276 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.276 19:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.842 19:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:16:54.218 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.218 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:54.218 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.218 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.218 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.218 19:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.218 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.218 19:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.477 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:54.477 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.477 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:54.477 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:54.477 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:54.477 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.477 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:54.477 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.477 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.477 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.477 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.477 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.044 00:16:55.044 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.044 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.044 19:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.611 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.611 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.611 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.611 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.611 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:55.611 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.611 { 00:16:55.611 "cntlid": 31, 00:16:55.611 "qid": 0, 00:16:55.611 "state": "enabled", 00:16:55.611 "thread": "nvmf_tgt_poll_group_000", 00:16:55.611 "listen_address": { 00:16:55.611 "trtype": "TCP", 00:16:55.611 "adrfam": "IPv4", 00:16:55.611 "traddr": "10.0.0.2", 00:16:55.611 "trsvcid": "4420" 00:16:55.611 }, 00:16:55.611 "peer_address": { 00:16:55.611 "trtype": "TCP", 00:16:55.611 "adrfam": "IPv4", 00:16:55.611 "traddr": "10.0.0.1", 00:16:55.611 "trsvcid": "39638" 00:16:55.611 }, 00:16:55.611 "auth": { 00:16:55.611 "state": "completed", 00:16:55.611 "digest": "sha256", 00:16:55.611 "dhgroup": "ffdhe4096" 00:16:55.611 } 00:16:55.611 } 00:16:55.611 ]' 00:16:55.611 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.611 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.611 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.611 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:55.611 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.611 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.611 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.611 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.889 19:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:16:57.270 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.270 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:57.270 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.270 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.270 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.270 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.270 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.270 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.270 19:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.528 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:57.528 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.528 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.528 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:57.528 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:57.528 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.528 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.528 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.528 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.528 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.528 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.528 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.094 00:16:58.094 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.094 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.094 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.351 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.351 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.351 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.351 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.351 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.351 19:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.351 { 00:16:58.351 "cntlid": 33, 00:16:58.351 "qid": 0, 00:16:58.351 "state": "enabled", 00:16:58.351 "thread": "nvmf_tgt_poll_group_000", 00:16:58.351 "listen_address": { 
00:16:58.351 "trtype": "TCP", 00:16:58.351 "adrfam": "IPv4", 00:16:58.351 "traddr": "10.0.0.2", 00:16:58.351 "trsvcid": "4420" 00:16:58.351 }, 00:16:58.351 "peer_address": { 00:16:58.351 "trtype": "TCP", 00:16:58.351 "adrfam": "IPv4", 00:16:58.351 "traddr": "10.0.0.1", 00:16:58.351 "trsvcid": "39670" 00:16:58.351 }, 00:16:58.351 "auth": { 00:16:58.351 "state": "completed", 00:16:58.351 "digest": "sha256", 00:16:58.351 "dhgroup": "ffdhe6144" 00:16:58.351 } 00:16:58.351 } 00:16:58.351 ]' 00:16:58.351 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.351 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.351 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.609 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:58.609 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.609 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.609 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.609 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.867 19:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:17:00.240 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.240 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:00.240 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.240 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.240 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.240 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.240 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:00.240 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:00.498 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:00.498 19:09:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.498 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:00.498 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:00.498 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:00.498 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.498 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.498 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.498 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.498 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.498 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.498 19:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.063 00:17:01.063 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.063 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.063 19:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.629 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.629 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.629 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.629 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.629 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.629 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.629 { 00:17:01.629 "cntlid": 35, 00:17:01.629 "qid": 0, 00:17:01.629 "state": "enabled", 00:17:01.629 "thread": "nvmf_tgt_poll_group_000", 00:17:01.629 "listen_address": { 00:17:01.629 "trtype": "TCP", 00:17:01.629 "adrfam": "IPv4", 00:17:01.629 "traddr": "10.0.0.2", 00:17:01.629 "trsvcid": "4420" 00:17:01.629 }, 00:17:01.629 "peer_address": { 00:17:01.629 "trtype": "TCP", 00:17:01.629 "adrfam": "IPv4", 00:17:01.629 "traddr": "10.0.0.1", 00:17:01.629 "trsvcid": "39694" 00:17:01.629 
}, 00:17:01.629 "auth": { 00:17:01.629 "state": "completed", 00:17:01.629 "digest": "sha256", 00:17:01.629 "dhgroup": "ffdhe6144" 00:17:01.629 } 00:17:01.629 } 00:17:01.629 ]' 00:17:01.629 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.629 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.629 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.629 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.629 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.629 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.629 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.629 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.194 19:09:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:17:03.570 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.570 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:03.570 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.570 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.570 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.570 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.570 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.570 19:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:03.570 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:03.570 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.570 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:03.570 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:03.570 19:09:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:03.570 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.570 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.570 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.570 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.570 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.570 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.570 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.505 00:17:04.505 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.505 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.505 19:09:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.763 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.763 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.763 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.763 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.763 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.763 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.763 { 00:17:04.763 "cntlid": 37, 00:17:04.763 "qid": 0, 00:17:04.763 "state": "enabled", 00:17:04.763 "thread": "nvmf_tgt_poll_group_000", 00:17:04.763 "listen_address": { 00:17:04.763 "trtype": "TCP", 00:17:04.763 "adrfam": "IPv4", 00:17:04.763 "traddr": "10.0.0.2", 00:17:04.763 "trsvcid": "4420" 00:17:04.763 }, 00:17:04.763 "peer_address": { 00:17:04.763 "trtype": "TCP", 00:17:04.763 "adrfam": "IPv4", 00:17:04.763 "traddr": "10.0.0.1", 00:17:04.764 "trsvcid": "38732" 00:17:04.764 }, 00:17:04.764 "auth": { 00:17:04.764 "state": "completed", 00:17:04.764 "digest": "sha256", 00:17:04.764 "dhgroup": "ffdhe6144" 00:17:04.764 } 00:17:04.764 } 00:17:04.764 ]' 00:17:04.764 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.764 19:09:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.764 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.022 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:05.022 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.022 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.022 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.022 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.286 19:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:17:06.664 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.664 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:06.664 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.664 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.664 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.664 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.664 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.664 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:06.922 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:06.922 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.922 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:06.922 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:06.922 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:06.922 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.922 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:06.922 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.922 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.922 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.922 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.922 19:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:07.862 00:17:07.862 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.862 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.862 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.124 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.124 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.124 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.124 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.124 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.124 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.124 { 00:17:08.124 "cntlid": 39, 00:17:08.124 "qid": 0, 00:17:08.124 "state": "enabled", 00:17:08.124 "thread": "nvmf_tgt_poll_group_000", 00:17:08.124 "listen_address": { 00:17:08.124 "trtype": "TCP", 00:17:08.124 "adrfam": "IPv4", 00:17:08.124 "traddr": "10.0.0.2", 00:17:08.124 "trsvcid": "4420" 00:17:08.124 }, 00:17:08.124 "peer_address": { 00:17:08.124 "trtype": "TCP", 00:17:08.124 "adrfam": "IPv4", 00:17:08.124 "traddr": "10.0.0.1", 00:17:08.124 "trsvcid": "38758" 00:17:08.124 }, 00:17:08.124 "auth": { 00:17:08.124 "state": "completed", 00:17:08.124 "digest": "sha256", 00:17:08.124 "dhgroup": "ffdhe6144" 00:17:08.124 } 00:17:08.124 } 00:17:08.124 ]' 00:17:08.383 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.383 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:08.383 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.383 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:08.383 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.383 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.383 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.383 19:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.641 19:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:17:10.016 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.016 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:10.016 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.016 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.016 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.016 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.016 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.016 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:10.016 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:10.275 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:10.275 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.275 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:10.275 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:10.275 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:10.275 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.275 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.275 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.275 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.275 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.275 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.275 19:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.650 00:17:11.650 19:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.650 19:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.650 19:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.908 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.908 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.908 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.908 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.908 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.908 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.908 { 00:17:11.908 "cntlid": 41, 00:17:11.908 "qid": 0, 00:17:11.908 "state": "enabled", 00:17:11.908 "thread": "nvmf_tgt_poll_group_000", 00:17:11.908 "listen_address": { 00:17:11.908 "trtype": "TCP", 00:17:11.908 "adrfam": "IPv4", 00:17:11.908 "traddr": "10.0.0.2", 00:17:11.908 "trsvcid": "4420" 00:17:11.908 }, 00:17:11.908 "peer_address": { 00:17:11.908 "trtype": "TCP", 00:17:11.908 "adrfam": "IPv4", 00:17:11.908 "traddr": "10.0.0.1", 00:17:11.908 "trsvcid": "38786" 00:17:11.908 }, 00:17:11.908 "auth": { 00:17:11.908 "state": "completed", 00:17:11.908 "digest": "sha256", 00:17:11.908 "dhgroup": "ffdhe8192" 00:17:11.908 } 00:17:11.908 } 00:17:11.908 ]' 00:17:11.908 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.908 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.908 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.908 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:11.908 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.908 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.908 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:11.908 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.496 19:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:17:13.447 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.447 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.447 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.447 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.447 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.447 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.447 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.447 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.014 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:14.014 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.014 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:14.014 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:14.014 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:14.014 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.014 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.014 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.014 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.014 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.014 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.014 19:09:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.389 00:17:15.389 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.389 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.389 19:09:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.647 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.647 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.647 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.647 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.647 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.647 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.647 { 00:17:15.647 "cntlid": 43, 00:17:15.647 "qid": 0, 00:17:15.647 "state": "enabled", 00:17:15.647 "thread": "nvmf_tgt_poll_group_000", 00:17:15.647 "listen_address": { 00:17:15.647 "trtype": "TCP", 00:17:15.647 "adrfam": "IPv4", 00:17:15.647 "traddr": "10.0.0.2", 00:17:15.647 "trsvcid": "4420" 00:17:15.647 }, 00:17:15.647 "peer_address": { 00:17:15.647 "trtype": "TCP", 00:17:15.647 "adrfam": "IPv4", 00:17:15.647 "traddr": "10.0.0.1", 00:17:15.647 "trsvcid": "37692" 00:17:15.647 }, 00:17:15.647 "auth": { 00:17:15.647 "state": "completed", 00:17:15.647 "digest": "sha256", 00:17:15.647 "dhgroup": "ffdhe8192" 00:17:15.647 } 00:17:15.647 } 00:17:15.647 ]' 00:17:15.647 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.647 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.647 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.905 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:15.905 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.905 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.905 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.905 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.471 19:09:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:17:17.843 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.843 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:17.843 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.843 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.843 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.843 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.843 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:17.843 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.101 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:18.101 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.101 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:18.101 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:18.101 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:18.101 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.101 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.101 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.101 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.101 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.101 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.101 19:09:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.475 00:17:19.475 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.475 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.475 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.041 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.041 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.041 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.041 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.041 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.041 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.041 { 00:17:20.041 "cntlid": 45, 00:17:20.041 "qid": 0, 00:17:20.041 "state": "enabled", 00:17:20.041 "thread": "nvmf_tgt_poll_group_000", 00:17:20.041 "listen_address": { 00:17:20.041 "trtype": "TCP", 00:17:20.041 "adrfam": "IPv4", 00:17:20.041 "traddr": "10.0.0.2", 00:17:20.041 "trsvcid": "4420" 00:17:20.041 }, 00:17:20.041 "peer_address": { 00:17:20.041 "trtype": "TCP", 00:17:20.041 "adrfam": "IPv4", 00:17:20.041 "traddr": "10.0.0.1", 00:17:20.041 "trsvcid": "37718" 00:17:20.041 }, 00:17:20.041 "auth": { 00:17:20.041 "state": "completed", 00:17:20.041 "digest": "sha256", 00:17:20.041 "dhgroup": "ffdhe8192" 00:17:20.041 } 00:17:20.041 } 00:17:20.041 ]' 00:17:20.041 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.041 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.041 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.041 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:20.041 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.041 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.041 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.041 19:09:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.607 19:09:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret 
DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:17:21.980 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.980 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:21.980 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.980 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.980 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.980 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.980 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:21.980 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:22.238 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:22.238 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.238 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:22.238 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:22.238 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:22.238 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.238 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:22.238 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.238 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.238 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.238 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.238 19:09:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.613 00:17:23.613 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.613 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
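# The --dhchap-secret/--dhchap-ctrl-secret strings follow the NVMe TP 8006
# ASCII representation "DHHC-1:<hh>:<base64>:". As far as I can tell, <hh>
# names the hash the secret was generated for (00 = unhashed, 01 = SHA-256,
# 02 = SHA-384, 03 = SHA-512) -- which lines up with key0..key3 in this trace --
# and the base64 payload is the key bytes with a CRC-32 appended. A structural
# check on a secret copied verbatim from this run (assumes coreutils base64):
secret='DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==:'
echo "$secret" | cut -d: -f3 | base64 -d | wc -c   # 52 = 48-byte key + 4-byte CRC-32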
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.613 19:09:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.613 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.613 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.613 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.613 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.613 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.613 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.613 { 00:17:23.613 "cntlid": 47, 00:17:23.613 "qid": 0, 00:17:23.613 "state": "enabled", 00:17:23.613 "thread": "nvmf_tgt_poll_group_000", 00:17:23.613 "listen_address": { 00:17:23.613 "trtype": "TCP", 00:17:23.613 "adrfam": "IPv4", 00:17:23.613 "traddr": "10.0.0.2", 00:17:23.613 "trsvcid": "4420" 00:17:23.613 }, 00:17:23.613 "peer_address": { 00:17:23.613 "trtype": "TCP", 00:17:23.613 "adrfam": "IPv4", 00:17:23.613 "traddr": "10.0.0.1", 00:17:23.613 "trsvcid": "37752" 00:17:23.613 }, 00:17:23.613 "auth": { 00:17:23.613 "state": "completed", 00:17:23.613 "digest": "sha256", 00:17:23.613 "dhgroup": "ffdhe8192" 00:17:23.613 } 00:17:23.613 } 00:17:23.613 ]' 00:17:23.613 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.613 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.613 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.871 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.871 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.871 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.871 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.871 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.438 19:09:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.814 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.071 00:17:26.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.329 19:09:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
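# From here the outer loops of auth.sh (the @91/@92/@93 lines quoted above)
# advance to sha384 with dhgroup "null", i.e. DH-HMAC-CHAP with no FFDHE
# exchange. Skeleton of the iteration order as reconstructed from this trace;
# sha256/sha384 and null/ffdhe2048/ffdhe3072/ffdhe8192 are directly observed
# in this excerpt, the remaining digests and groups are inferred from the
# pattern, not confirmed by it:
for digest in sha256 sha384 sha512; do
    for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            # both ends get restricted to a single choice per pass (auth.sh@94)
            hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done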
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.587 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.587 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.587 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.587 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.587 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.587 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.587 { 00:17:26.587 "cntlid": 49, 00:17:26.587 "qid": 0, 00:17:26.587 "state": "enabled", 00:17:26.587 "thread": "nvmf_tgt_poll_group_000", 00:17:26.587 "listen_address": { 00:17:26.587 "trtype": "TCP", 00:17:26.587 "adrfam": "IPv4", 00:17:26.587 "traddr": "10.0.0.2", 00:17:26.587 "trsvcid": "4420" 00:17:26.587 }, 00:17:26.587 "peer_address": { 00:17:26.587 "trtype": "TCP", 00:17:26.587 "adrfam": "IPv4", 00:17:26.587 "traddr": "10.0.0.1", 00:17:26.587 "trsvcid": "36108" 00:17:26.587 }, 00:17:26.587 "auth": { 00:17:26.587 "state": "completed", 00:17:26.587 "digest": "sha384", 00:17:26.587 "dhgroup": "null" 00:17:26.587 } 00:17:26.587 } 00:17:26.587 ]' 00:17:26.587 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.587 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.587 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.587 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:26.587 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.587 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.587 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.587 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.153 19:09:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:17:28.525 19:09:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.525 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:28.525 19:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.525 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.525 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.525 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.525 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:28.525 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:29.095 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:29.095 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.095 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.095 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:29.095 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:29.095 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.095 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.095 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.095 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.095 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.095 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.095 19:09:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.675 00:17:29.675 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.675 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.675 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.933 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.933 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.933 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.933 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.933 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.933 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.933 { 00:17:29.933 "cntlid": 51, 00:17:29.933 "qid": 0, 00:17:29.933 "state": "enabled", 00:17:29.933 "thread": "nvmf_tgt_poll_group_000", 00:17:29.933 "listen_address": { 00:17:29.933 "trtype": "TCP", 00:17:29.933 "adrfam": "IPv4", 00:17:29.933 "traddr": "10.0.0.2", 00:17:29.933 "trsvcid": "4420" 00:17:29.933 }, 00:17:29.933 "peer_address": { 00:17:29.933 "trtype": "TCP", 00:17:29.933 "adrfam": "IPv4", 00:17:29.933 "traddr": "10.0.0.1", 00:17:29.933 "trsvcid": "36124" 00:17:29.933 }, 00:17:29.933 "auth": { 00:17:29.933 "state": "completed", 00:17:29.933 "digest": "sha384", 00:17:29.933 "dhgroup": "null" 00:17:29.933 } 00:17:29.933 } 00:17:29.933 ]' 00:17:29.933 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.933 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.933 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.191 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:30.191 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.191 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.191 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.191 19:09:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.757 19:09:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:17:32.131 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.131 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:32.131 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.131 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.131 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.131 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
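# Each pass also verifies the handshake out-of-band: the target is asked for
# the subsystem's queue pairs and the first qpair's auth descriptor is matched
# against what was configured. A sketch of that check, reusing the jq filters
# from the trace ($rpc_py and $subnqn as in the sketch further up):
qpairs=$("$rpc_py" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished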
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.131 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:32.132 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:32.390 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:32.390 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.390 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:32.390 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:32.390 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:32.390 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.390 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.390 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.390 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.390 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.390 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.390 19:09:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.957 00:17:32.957 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.957 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.957 19:09:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.524 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.524 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.524 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.524 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.524 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:33.524 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.524 { 00:17:33.524 "cntlid": 53, 00:17:33.524 "qid": 0, 00:17:33.524 "state": "enabled", 00:17:33.524 "thread": "nvmf_tgt_poll_group_000", 00:17:33.524 "listen_address": { 00:17:33.524 "trtype": "TCP", 00:17:33.524 "adrfam": "IPv4", 00:17:33.524 "traddr": "10.0.0.2", 00:17:33.524 "trsvcid": "4420" 00:17:33.524 }, 00:17:33.524 "peer_address": { 00:17:33.524 "trtype": "TCP", 00:17:33.524 "adrfam": "IPv4", 00:17:33.524 "traddr": "10.0.0.1", 00:17:33.524 "trsvcid": "36148" 00:17:33.524 }, 00:17:33.524 "auth": { 00:17:33.524 "state": "completed", 00:17:33.524 "digest": "sha384", 00:17:33.524 "dhgroup": "null" 00:17:33.524 } 00:17:33.524 } 00:17:33.524 ]' 00:17:33.524 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.524 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.524 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.524 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:33.524 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.783 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.783 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.783 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.041 19:09:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:17:35.415 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.415 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:35.415 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.415 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.415 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.415 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.415 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:35.415 19:09:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:35.673 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:35.673 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.673 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.673 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:35.673 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:35.673 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.673 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:35.673 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.673 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.673 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.673 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.673 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.242 00:17:36.242 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.242 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.242 19:09:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.810 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.810 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.810 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.810 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.810 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.810 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.810 { 00:17:36.810 "cntlid": 55, 00:17:36.810 "qid": 0, 00:17:36.810 "state": "enabled", 00:17:36.810 "thread": "nvmf_tgt_poll_group_000", 00:17:36.810 "listen_address": { 00:17:36.810 "trtype": "TCP", 00:17:36.810 "adrfam": "IPv4", 00:17:36.810 "traddr": "10.0.0.2", 00:17:36.810 "trsvcid": "4420" 00:17:36.810 }, 00:17:36.810 "peer_address": { 
00:17:36.810 "trtype": "TCP", 00:17:36.810 "adrfam": "IPv4", 00:17:36.811 "traddr": "10.0.0.1", 00:17:36.811 "trsvcid": "52850" 00:17:36.811 }, 00:17:36.811 "auth": { 00:17:36.811 "state": "completed", 00:17:36.811 "digest": "sha384", 00:17:36.811 "dhgroup": "null" 00:17:36.811 } 00:17:36.811 } 00:17:36.811 ]' 00:17:36.811 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.811 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.811 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.811 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:36.811 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.811 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.811 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.811 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.070 19:09:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:17:38.449 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.449 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:38.449 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.449 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.449 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.449 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.449 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.449 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.449 19:09:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.709 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:38.709 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.709 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:17:38.709 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:38.709 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:38.709 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.709 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.709 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.709 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.709 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.709 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.709 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.275 00:17:39.275 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.275 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.275 19:09:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.843 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.843 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.843 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.843 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.843 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.843 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.843 { 00:17:39.843 "cntlid": 57, 00:17:39.843 "qid": 0, 00:17:39.844 "state": "enabled", 00:17:39.844 "thread": "nvmf_tgt_poll_group_000", 00:17:39.844 "listen_address": { 00:17:39.844 "trtype": "TCP", 00:17:39.844 "adrfam": "IPv4", 00:17:39.844 "traddr": "10.0.0.2", 00:17:39.844 "trsvcid": "4420" 00:17:39.844 }, 00:17:39.844 "peer_address": { 00:17:39.844 "trtype": "TCP", 00:17:39.844 "adrfam": "IPv4", 00:17:39.844 "traddr": "10.0.0.1", 00:17:39.844 "trsvcid": "52874" 00:17:39.844 }, 00:17:39.844 "auth": { 00:17:39.844 "state": "completed", 00:17:39.844 "digest": "sha384", 00:17:39.844 "dhgroup": "ffdhe2048" 00:17:39.844 } 00:17:39.844 } 00:17:39.844 ]' 
00:17:39.844 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.844 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.844 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.844 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:39.844 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.844 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.844 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.844 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.411 19:09:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:17:41.790 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.790 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:41.790 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.790 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.790 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.790 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.790 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:41.790 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:42.049 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:42.049 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.049 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.049 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:42.049 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:42.049 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
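# Two RPC sockets are in play throughout this trace: the default socket for
# the target app (all nvmf_* calls) and /var/tmp/host.sock for a second SPDK
# app acting as the initiator (all bdev_nvme_* calls). The hostrpc wrapper
# expanded on every auth.sh@31 line above is, in effect:
hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}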
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.049 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.049 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.049 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.049 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.049 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.049 19:09:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.617 00:17:42.617 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.617 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.617 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.876 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.876 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.876 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.876 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.876 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.876 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.876 { 00:17:42.876 "cntlid": 59, 00:17:42.876 "qid": 0, 00:17:42.876 "state": "enabled", 00:17:42.876 "thread": "nvmf_tgt_poll_group_000", 00:17:42.876 "listen_address": { 00:17:42.876 "trtype": "TCP", 00:17:42.876 "adrfam": "IPv4", 00:17:42.876 "traddr": "10.0.0.2", 00:17:42.876 "trsvcid": "4420" 00:17:42.876 }, 00:17:42.876 "peer_address": { 00:17:42.876 "trtype": "TCP", 00:17:42.876 "adrfam": "IPv4", 00:17:42.876 "traddr": "10.0.0.1", 00:17:42.876 "trsvcid": "52910" 00:17:42.876 }, 00:17:42.876 "auth": { 00:17:42.876 "state": "completed", 00:17:42.876 "digest": "sha384", 00:17:42.876 "dhgroup": "ffdhe2048" 00:17:42.876 } 00:17:42.876 } 00:17:42.876 ]' 00:17:42.876 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.876 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.876 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.135 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:43.135 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.136 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.136 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.136 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.396 19:09:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:17:44.776 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.776 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:44.776 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.776 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.776 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.776 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.776 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:44.776 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:45.381 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:45.381 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.381 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:45.381 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:45.381 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:45.381 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.381 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.381 
19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.381 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.381 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.381 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.381 19:09:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.948 00:17:45.948 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.948 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.948 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.206 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.206 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.206 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.206 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.206 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.206 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.206 { 00:17:46.206 "cntlid": 61, 00:17:46.206 "qid": 0, 00:17:46.206 "state": "enabled", 00:17:46.206 "thread": "nvmf_tgt_poll_group_000", 00:17:46.206 "listen_address": { 00:17:46.206 "trtype": "TCP", 00:17:46.206 "adrfam": "IPv4", 00:17:46.206 "traddr": "10.0.0.2", 00:17:46.206 "trsvcid": "4420" 00:17:46.206 }, 00:17:46.206 "peer_address": { 00:17:46.206 "trtype": "TCP", 00:17:46.206 "adrfam": "IPv4", 00:17:46.206 "traddr": "10.0.0.1", 00:17:46.206 "trsvcid": "37154" 00:17:46.206 }, 00:17:46.206 "auth": { 00:17:46.206 "state": "completed", 00:17:46.206 "digest": "sha384", 00:17:46.207 "dhgroup": "ffdhe2048" 00:17:46.207 } 00:17:46.207 } 00:17:46.207 ]' 00:17:46.207 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.207 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.207 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.465 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.465 19:09:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.465 19:09:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.465 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.465 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.032 19:09:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:17:48.407 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.407 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:48.407 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.407 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.407 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.407 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.407 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:48.407 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:48.974 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:48.974 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.974 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:48.974 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:48.974 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:48.974 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.974 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:48.974 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.974 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.974 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.974 
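# Note the key3 passes: nvmf_subsystem_add_host is called with --dhchap-key
# key3 only, and the matching nvme connect carries no --dhchap-ctrl-secret,
# so this is the one combination exercised without bidirectional
# authentication. That is the auth.sh@37 expansion at work: ${ckeys[$3]:+...}
# yields nothing when the controller key for that index is empty. Minimal
# illustration of the idiom (hypothetical values, same ${var:+word} expansion):
ckeys=([0]=a [1]=b [2]=c [3]=)
ckey=(${ckeys[3]:+--dhchap-ctrlr-key ckey3})
echo "${#ckey[@]}"   # 0 -- the option pair disappears for key3
ckey=(${ckeys[2]:+--dhchap-ctrlr-key ckey2})
echo "${#ckey[@]}"   # 2 -- present for keys that do have a controller key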
19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.975 19:09:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.543 00:17:49.543 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.543 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.543 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.802 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.802 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.802 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.802 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.802 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.802 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.802 { 00:17:49.802 "cntlid": 63, 00:17:49.802 "qid": 0, 00:17:49.802 "state": "enabled", 00:17:49.802 "thread": "nvmf_tgt_poll_group_000", 00:17:49.802 "listen_address": { 00:17:49.802 "trtype": "TCP", 00:17:49.802 "adrfam": "IPv4", 00:17:49.802 "traddr": "10.0.0.2", 00:17:49.802 "trsvcid": "4420" 00:17:49.802 }, 00:17:49.802 "peer_address": { 00:17:49.802 "trtype": "TCP", 00:17:49.802 "adrfam": "IPv4", 00:17:49.802 "traddr": "10.0.0.1", 00:17:49.802 "trsvcid": "37186" 00:17:49.802 }, 00:17:49.802 "auth": { 00:17:49.802 "state": "completed", 00:17:49.802 "digest": "sha384", 00:17:49.802 "dhgroup": "ffdhe2048" 00:17:49.802 } 00:17:49.802 } 00:17:49.802 ]' 00:17:49.802 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.802 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.802 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.802 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:49.802 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.060 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.061 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.061 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:50.320 19:09:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:17:51.696 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.696 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:51.696 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.696 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.696 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.696 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.696 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.696 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:51.696 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:51.955 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:51.955 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.955 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:51.955 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:51.955 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:51.955 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.955 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.955 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.955 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.955 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.955 19:09:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.955 19:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.521 00:17:52.521 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.521 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.521 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.779 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.779 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.779 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.779 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.779 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.779 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.779 { 00:17:52.779 "cntlid": 65, 00:17:52.779 "qid": 0, 00:17:52.779 "state": "enabled", 00:17:52.779 "thread": "nvmf_tgt_poll_group_000", 00:17:52.779 "listen_address": { 00:17:52.779 "trtype": "TCP", 00:17:52.779 "adrfam": "IPv4", 00:17:52.779 "traddr": "10.0.0.2", 00:17:52.779 "trsvcid": "4420" 00:17:52.779 }, 00:17:52.779 "peer_address": { 00:17:52.779 "trtype": "TCP", 00:17:52.779 "adrfam": "IPv4", 00:17:52.779 "traddr": "10.0.0.1", 00:17:52.779 "trsvcid": "37210" 00:17:52.779 }, 00:17:52.779 "auth": { 00:17:52.779 "state": "completed", 00:17:52.779 "digest": "sha384", 00:17:52.779 "dhgroup": "ffdhe3072" 00:17:52.779 } 00:17:52.779 } 00:17:52.779 ]' 00:17:52.779 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.779 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.779 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.038 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:53.038 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.038 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.038 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.038 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.296 19:09:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:17:54.670 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.670 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:54.670 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.670 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.670 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.670 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.670 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:54.670 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:54.928 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:54.928 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.928 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:54.928 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:54.928 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:54.928 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.928 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.928 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.928 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.928 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.928 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.928 19:10:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.494 00:17:55.494 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.494 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.494 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.059 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.059 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.059 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.059 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.059 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.059 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.059 { 00:17:56.059 "cntlid": 67, 00:17:56.059 "qid": 0, 00:17:56.059 "state": "enabled", 00:17:56.059 "thread": "nvmf_tgt_poll_group_000", 00:17:56.059 "listen_address": { 00:17:56.059 "trtype": "TCP", 00:17:56.059 "adrfam": "IPv4", 00:17:56.059 "traddr": "10.0.0.2", 00:17:56.059 "trsvcid": "4420" 00:17:56.059 }, 00:17:56.059 "peer_address": { 00:17:56.059 "trtype": "TCP", 00:17:56.059 "adrfam": "IPv4", 00:17:56.059 "traddr": "10.0.0.1", 00:17:56.059 "trsvcid": "41762" 00:17:56.059 }, 00:17:56.059 "auth": { 00:17:56.059 "state": "completed", 00:17:56.059 "digest": "sha384", 00:17:56.059 "dhgroup": "ffdhe3072" 00:17:56.059 } 00:17:56.059 } 00:17:56.059 ]' 00:17:56.059 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.059 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.059 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.059 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.059 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.059 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.059 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.059 19:10:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.624 19:10:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:17:57.556 19:10:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.814 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:57.814 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.814 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.814 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.814 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.814 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:57.814 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:58.072 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:58.072 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.072 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:58.072 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:58.072 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:58.072 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.072 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.072 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.072 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.072 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.072 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.072 19:10:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.637 00:17:58.637 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.637 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.637 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.894 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.894 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.894 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.894 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.894 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.894 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.894 { 00:17:58.894 "cntlid": 69, 00:17:58.894 "qid": 0, 00:17:58.894 "state": "enabled", 00:17:58.894 "thread": "nvmf_tgt_poll_group_000", 00:17:58.894 "listen_address": { 00:17:58.894 "trtype": "TCP", 00:17:58.894 "adrfam": "IPv4", 00:17:58.894 "traddr": "10.0.0.2", 00:17:58.894 "trsvcid": "4420" 00:17:58.894 }, 00:17:58.894 "peer_address": { 00:17:58.894 "trtype": "TCP", 00:17:58.894 "adrfam": "IPv4", 00:17:58.894 "traddr": "10.0.0.1", 00:17:58.894 "trsvcid": "41790" 00:17:58.894 }, 00:17:58.894 "auth": { 00:17:58.894 "state": "completed", 00:17:58.894 "digest": "sha384", 00:17:58.894 "dhgroup": "ffdhe3072" 00:17:58.894 } 00:17:58.894 } 00:17:58.894 ]' 00:17:58.894 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.894 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:58.894 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.894 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.894 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.894 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.894 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.894 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.485 19:10:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:18:00.419 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.419 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:00.419 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.419 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.419 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.419 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.419 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:00.419 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:00.986 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:00.986 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.986 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:00.986 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:00.986 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:00.986 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.986 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:00.986 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.986 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.986 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.986 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.986 19:10:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.553 00:18:01.553 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.553 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.553 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.812 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.812 19:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.812 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.812 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.812 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.070 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.070 { 00:18:02.070 "cntlid": 71, 00:18:02.070 "qid": 0, 00:18:02.070 "state": "enabled", 00:18:02.070 "thread": "nvmf_tgt_poll_group_000", 00:18:02.070 "listen_address": { 00:18:02.070 "trtype": "TCP", 00:18:02.070 "adrfam": "IPv4", 00:18:02.070 "traddr": "10.0.0.2", 00:18:02.070 "trsvcid": "4420" 00:18:02.070 }, 00:18:02.070 "peer_address": { 00:18:02.070 "trtype": "TCP", 00:18:02.070 "adrfam": "IPv4", 00:18:02.070 "traddr": "10.0.0.1", 00:18:02.070 "trsvcid": "41812" 00:18:02.070 }, 00:18:02.070 "auth": { 00:18:02.070 "state": "completed", 00:18:02.070 "digest": "sha384", 00:18:02.070 "dhgroup": "ffdhe3072" 00:18:02.070 } 00:18:02.070 } 00:18:02.070 ]' 00:18:02.070 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.070 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.070 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.070 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.070 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.070 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.070 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.070 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.328 19:10:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:18:03.703 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.703 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:03.703 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.703 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.703 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.703 19:10:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.703 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.703 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:03.703 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:04.269 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:04.269 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.269 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:04.269 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:04.269 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:04.269 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.269 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.269 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.270 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.270 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.270 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.270 19:10:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.836 00:18:04.836 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.836 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.836 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.401 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.401 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.401 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.401 19:10:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.401 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.401 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.401 { 00:18:05.401 "cntlid": 73, 00:18:05.401 "qid": 0, 00:18:05.401 "state": "enabled", 00:18:05.401 "thread": "nvmf_tgt_poll_group_000", 00:18:05.401 "listen_address": { 00:18:05.401 "trtype": "TCP", 00:18:05.401 "adrfam": "IPv4", 00:18:05.401 "traddr": "10.0.0.2", 00:18:05.401 "trsvcid": "4420" 00:18:05.401 }, 00:18:05.401 "peer_address": { 00:18:05.401 "trtype": "TCP", 00:18:05.401 "adrfam": "IPv4", 00:18:05.401 "traddr": "10.0.0.1", 00:18:05.401 "trsvcid": "49528" 00:18:05.401 }, 00:18:05.401 "auth": { 00:18:05.401 "state": "completed", 00:18:05.401 "digest": "sha384", 00:18:05.401 "dhgroup": "ffdhe4096" 00:18:05.401 } 00:18:05.401 } 00:18:05.401 ]' 00:18:05.401 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.401 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.401 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.401 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.401 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.401 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.401 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.401 19:10:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.659 19:10:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:18:07.033 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.033 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:07.033 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.033 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.033 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.033 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.033 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:07.033 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:07.292 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:07.292 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.292 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:07.292 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:07.292 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:07.292 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.292 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.292 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.292 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.292 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.292 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.292 19:10:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.861 00:18:07.861 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.861 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.861 19:10:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.429 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.429 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.429 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.429 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.429 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.429 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:18:08.429 { 00:18:08.429 "cntlid": 75, 00:18:08.429 "qid": 0, 00:18:08.429 "state": "enabled", 00:18:08.429 "thread": "nvmf_tgt_poll_group_000", 00:18:08.429 "listen_address": { 00:18:08.429 "trtype": "TCP", 00:18:08.429 "adrfam": "IPv4", 00:18:08.429 "traddr": "10.0.0.2", 00:18:08.429 "trsvcid": "4420" 00:18:08.429 }, 00:18:08.429 "peer_address": { 00:18:08.429 "trtype": "TCP", 00:18:08.429 "adrfam": "IPv4", 00:18:08.429 "traddr": "10.0.0.1", 00:18:08.429 "trsvcid": "49546" 00:18:08.429 }, 00:18:08.429 "auth": { 00:18:08.429 "state": "completed", 00:18:08.429 "digest": "sha384", 00:18:08.429 "dhgroup": "ffdhe4096" 00:18:08.429 } 00:18:08.429 } 00:18:08.429 ]' 00:18:08.429 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.429 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.429 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.687 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.687 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.687 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.687 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.687 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.254 19:10:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:18:10.189 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.189 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:10.189 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.189 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.189 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.189 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.189 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:10.189 19:10:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:10.756 
19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:10.756 19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.756 19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:10.756 19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:10.756 19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:10.756 19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.756 19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.756 19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.756 19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.756 19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.756 19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.756 19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.013 00:18:11.272 19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.272 19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.272 19:10:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.839 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.839 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.839 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.839 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.839 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.839 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.839 { 00:18:11.839 "cntlid": 77, 00:18:11.839 "qid": 0, 00:18:11.839 "state": "enabled", 00:18:11.839 "thread": "nvmf_tgt_poll_group_000", 00:18:11.839 "listen_address": { 00:18:11.839 "trtype": "TCP", 00:18:11.839 "adrfam": "IPv4", 00:18:11.839 "traddr": "10.0.0.2", 00:18:11.839 "trsvcid": "4420" 00:18:11.839 }, 00:18:11.839 "peer_address": { 
00:18:11.839 "trtype": "TCP", 00:18:11.839 "adrfam": "IPv4", 00:18:11.839 "traddr": "10.0.0.1", 00:18:11.839 "trsvcid": "49574" 00:18:11.839 }, 00:18:11.839 "auth": { 00:18:11.839 "state": "completed", 00:18:11.839 "digest": "sha384", 00:18:11.839 "dhgroup": "ffdhe4096" 00:18:11.839 } 00:18:11.839 } 00:18:11.839 ]' 00:18:11.839 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.839 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.839 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.839 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.839 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.839 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.839 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.839 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.097 19:10:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:18:13.472 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.730 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:13.730 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.730 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.730 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.730 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.730 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:13.730 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:14.325 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:14.325 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.325 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:18:14.325 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:14.325 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:14.325 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.325 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:14.325 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.325 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.325 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.325 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.325 19:10:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.584 00:18:14.584 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.584 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.584 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.151 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.151 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.151 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.151 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.151 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.151 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.151 { 00:18:15.151 "cntlid": 79, 00:18:15.151 "qid": 0, 00:18:15.151 "state": "enabled", 00:18:15.151 "thread": "nvmf_tgt_poll_group_000", 00:18:15.151 "listen_address": { 00:18:15.151 "trtype": "TCP", 00:18:15.151 "adrfam": "IPv4", 00:18:15.151 "traddr": "10.0.0.2", 00:18:15.151 "trsvcid": "4420" 00:18:15.151 }, 00:18:15.151 "peer_address": { 00:18:15.151 "trtype": "TCP", 00:18:15.151 "adrfam": "IPv4", 00:18:15.151 "traddr": "10.0.0.1", 00:18:15.151 "trsvcid": "38094" 00:18:15.151 }, 00:18:15.151 "auth": { 00:18:15.151 "state": "completed", 00:18:15.151 "digest": "sha384", 00:18:15.151 "dhgroup": "ffdhe4096" 00:18:15.151 } 00:18:15.151 } 00:18:15.151 ]' 00:18:15.151 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:18:15.151 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.151 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.151 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:15.151 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.151 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.152 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.152 19:10:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.718 19:10:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:18:17.092 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.092 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:17.092 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.092 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.092 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.092 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.092 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.092 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:17.092 19:10:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:17.350 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:17.350 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.350 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.350 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:17.350 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:17.350 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:18:17.350 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.350 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.350 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.350 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.350 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.350 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.284 00:18:18.284 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.284 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.284 19:10:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.542 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.542 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.542 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.542 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.542 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.542 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.542 { 00:18:18.542 "cntlid": 81, 00:18:18.542 "qid": 0, 00:18:18.542 "state": "enabled", 00:18:18.542 "thread": "nvmf_tgt_poll_group_000", 00:18:18.542 "listen_address": { 00:18:18.542 "trtype": "TCP", 00:18:18.542 "adrfam": "IPv4", 00:18:18.542 "traddr": "10.0.0.2", 00:18:18.542 "trsvcid": "4420" 00:18:18.542 }, 00:18:18.542 "peer_address": { 00:18:18.542 "trtype": "TCP", 00:18:18.542 "adrfam": "IPv4", 00:18:18.542 "traddr": "10.0.0.1", 00:18:18.542 "trsvcid": "38120" 00:18:18.542 }, 00:18:18.542 "auth": { 00:18:18.542 "state": "completed", 00:18:18.542 "digest": "sha384", 00:18:18.542 "dhgroup": "ffdhe6144" 00:18:18.542 } 00:18:18.542 } 00:18:18.542 ]' 00:18:18.542 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.801 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.801 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.801 19:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:18.801 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.801 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.801 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.801 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.059 19:10:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:18:20.450 19:10:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.450 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:20.450 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.450 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.450 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.450 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.450 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:20.450 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:20.709 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:20.709 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.709 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:20.709 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:20.709 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:20.709 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.709 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.709 19:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.709 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.709 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.709 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.709 19:10:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.644 00:18:21.644 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.644 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.644 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.903 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.903 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.903 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.903 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.903 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.903 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.903 { 00:18:21.903 "cntlid": 83, 00:18:21.903 "qid": 0, 00:18:21.903 "state": "enabled", 00:18:21.903 "thread": "nvmf_tgt_poll_group_000", 00:18:21.903 "listen_address": { 00:18:21.903 "trtype": "TCP", 00:18:21.903 "adrfam": "IPv4", 00:18:21.903 "traddr": "10.0.0.2", 00:18:21.903 "trsvcid": "4420" 00:18:21.903 }, 00:18:21.903 "peer_address": { 00:18:21.903 "trtype": "TCP", 00:18:21.903 "adrfam": "IPv4", 00:18:21.903 "traddr": "10.0.0.1", 00:18:21.903 "trsvcid": "38136" 00:18:21.903 }, 00:18:21.903 "auth": { 00:18:21.903 "state": "completed", 00:18:21.903 "digest": "sha384", 00:18:21.903 "dhgroup": "ffdhe6144" 00:18:21.903 } 00:18:21.903 } 00:18:21.903 ]' 00:18:21.903 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.903 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.903 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.161 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:22.161 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.161 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.161 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.161 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.419 19:10:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:18:23.793 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.793 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:23.793 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.793 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.793 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.793 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.793 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:23.793 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:24.052 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:24.052 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.052 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.052 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:24.052 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:24.052 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.052 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.052 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.052 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.052 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.052 19:10:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.052 19:10:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.987 00:18:24.987 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.987 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.987 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.553 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.553 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.554 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.554 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.554 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.554 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.554 { 00:18:25.554 "cntlid": 85, 00:18:25.554 "qid": 0, 00:18:25.554 "state": "enabled", 00:18:25.554 "thread": "nvmf_tgt_poll_group_000", 00:18:25.554 "listen_address": { 00:18:25.554 "trtype": "TCP", 00:18:25.554 "adrfam": "IPv4", 00:18:25.554 "traddr": "10.0.0.2", 00:18:25.554 "trsvcid": "4420" 00:18:25.554 }, 00:18:25.554 "peer_address": { 00:18:25.554 "trtype": "TCP", 00:18:25.554 "adrfam": "IPv4", 00:18:25.554 "traddr": "10.0.0.1", 00:18:25.554 "trsvcid": "58688" 00:18:25.554 }, 00:18:25.554 "auth": { 00:18:25.554 "state": "completed", 00:18:25.554 "digest": "sha384", 00:18:25.554 "dhgroup": "ffdhe6144" 00:18:25.554 } 00:18:25.554 } 00:18:25.554 ]' 00:18:25.554 19:10:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.554 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.554 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.554 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.554 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.554 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.554 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.554 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.812 19:10:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:18:27.187 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.187 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:27.187 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.187 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.187 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.187 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.187 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:27.187 19:10:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:27.446 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:27.446 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.446 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.446 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:27.446 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:27.446 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.446 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:27.446 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.446 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.446 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.446 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.446 19:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.012 00:18:28.012 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.012 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.012 19:10:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.585 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.585 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.585 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.585 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.585 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.585 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.585 { 00:18:28.585 "cntlid": 87, 00:18:28.585 "qid": 0, 00:18:28.585 "state": "enabled", 00:18:28.585 "thread": "nvmf_tgt_poll_group_000", 00:18:28.585 "listen_address": { 00:18:28.585 "trtype": "TCP", 00:18:28.585 "adrfam": "IPv4", 00:18:28.585 "traddr": "10.0.0.2", 00:18:28.585 "trsvcid": "4420" 00:18:28.585 }, 00:18:28.585 "peer_address": { 00:18:28.585 "trtype": "TCP", 00:18:28.585 "adrfam": "IPv4", 00:18:28.585 "traddr": "10.0.0.1", 00:18:28.585 "trsvcid": "58714" 00:18:28.585 }, 00:18:28.585 "auth": { 00:18:28.585 "state": "completed", 00:18:28.585 "digest": "sha384", 00:18:28.585 "dhgroup": "ffdhe6144" 00:18:28.585 } 00:18:28.585 } 00:18:28.585 ]' 00:18:28.585 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.585 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.585 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.585 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:28.585 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.585 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.585 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.585 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.159 19:10:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 
--dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:18:30.531 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.531 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:30.531 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.531 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.531 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.531 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.531 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.531 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:30.531 19:10:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:30.788 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:30.788 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.788 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:30.788 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:30.788 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:30.788 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.788 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.788 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.788 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.788 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.788 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.788 19:10:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.160 00:18:32.160 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.160 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.160 19:10:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.418 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.418 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.418 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.418 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.418 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.418 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.418 { 00:18:32.418 "cntlid": 89, 00:18:32.418 "qid": 0, 00:18:32.418 "state": "enabled", 00:18:32.418 "thread": "nvmf_tgt_poll_group_000", 00:18:32.418 "listen_address": { 00:18:32.418 "trtype": "TCP", 00:18:32.418 "adrfam": "IPv4", 00:18:32.418 "traddr": "10.0.0.2", 00:18:32.418 "trsvcid": "4420" 00:18:32.418 }, 00:18:32.418 "peer_address": { 00:18:32.418 "trtype": "TCP", 00:18:32.418 "adrfam": "IPv4", 00:18:32.418 "traddr": "10.0.0.1", 00:18:32.418 "trsvcid": "58732" 00:18:32.418 }, 00:18:32.418 "auth": { 00:18:32.418 "state": "completed", 00:18:32.418 "digest": "sha384", 00:18:32.418 "dhgroup": "ffdhe8192" 00:18:32.418 } 00:18:32.418 } 00:18:32.418 ]' 00:18:32.418 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.676 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.676 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.676 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.676 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.676 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.676 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.676 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.934 19:10:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:18:34.308 19:10:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.308 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:34.308 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.308 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.308 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.308 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.308 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:34.308 19:10:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:34.877 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:34.878 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.878 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.878 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:34.878 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:34.878 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.878 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.878 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.878 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.878 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.878 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.878 19:10:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.811 00:18:35.811 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.811 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.811 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.377 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.377 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.377 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.377 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.377 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.377 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.377 { 00:18:36.377 "cntlid": 91, 00:18:36.377 "qid": 0, 00:18:36.377 "state": "enabled", 00:18:36.377 "thread": "nvmf_tgt_poll_group_000", 00:18:36.377 "listen_address": { 00:18:36.377 "trtype": "TCP", 00:18:36.377 "adrfam": "IPv4", 00:18:36.377 "traddr": "10.0.0.2", 00:18:36.377 "trsvcid": "4420" 00:18:36.377 }, 00:18:36.377 "peer_address": { 00:18:36.377 "trtype": "TCP", 00:18:36.377 "adrfam": "IPv4", 00:18:36.377 "traddr": "10.0.0.1", 00:18:36.377 "trsvcid": "58340" 00:18:36.377 }, 00:18:36.377 "auth": { 00:18:36.377 "state": "completed", 00:18:36.377 "digest": "sha384", 00:18:36.377 "dhgroup": "ffdhe8192" 00:18:36.377 } 00:18:36.377 } 00:18:36.377 ]' 00:18:36.377 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.377 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.377 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.377 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.377 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.377 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.377 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.377 19:10:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.635 19:10:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:18:38.009 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.009 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:38.009 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.009 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.009 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.009 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.009 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:38.009 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:38.266 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:38.266 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.266 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:38.266 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:38.266 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:38.266 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.266 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.267 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.267 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.267 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.267 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.267 19:10:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.641 00:18:39.641 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.641 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.641 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.899 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:18:39.899 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.899 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.899 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.899 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.899 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.899 { 00:18:39.899 "cntlid": 93, 00:18:39.899 "qid": 0, 00:18:39.899 "state": "enabled", 00:18:39.899 "thread": "nvmf_tgt_poll_group_000", 00:18:39.899 "listen_address": { 00:18:39.899 "trtype": "TCP", 00:18:39.899 "adrfam": "IPv4", 00:18:39.899 "traddr": "10.0.0.2", 00:18:39.899 "trsvcid": "4420" 00:18:39.899 }, 00:18:39.899 "peer_address": { 00:18:39.899 "trtype": "TCP", 00:18:39.899 "adrfam": "IPv4", 00:18:39.899 "traddr": "10.0.0.1", 00:18:39.899 "trsvcid": "58354" 00:18:39.899 }, 00:18:39.899 "auth": { 00:18:39.899 "state": "completed", 00:18:39.899 "digest": "sha384", 00:18:39.899 "dhgroup": "ffdhe8192" 00:18:39.899 } 00:18:39.899 } 00:18:39.899 ]' 00:18:39.899 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.899 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.157 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.158 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:40.158 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.158 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.158 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.158 19:10:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.416 19:10:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:18:41.790 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.790 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:41.790 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.790 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.790 19:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.790 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.790 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:41.790 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:42.049 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:42.049 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.049 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.049 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:42.049 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.049 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.049 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:42.049 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.049 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.049 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.049 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.049 19:10:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.982 00:18:42.982 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.982 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.982 19:10:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.547 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.547 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.547 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.547 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:18:43.547 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.547 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.547 { 00:18:43.547 "cntlid": 95, 00:18:43.547 "qid": 0, 00:18:43.547 "state": "enabled", 00:18:43.547 "thread": "nvmf_tgt_poll_group_000", 00:18:43.547 "listen_address": { 00:18:43.547 "trtype": "TCP", 00:18:43.547 "adrfam": "IPv4", 00:18:43.547 "traddr": "10.0.0.2", 00:18:43.547 "trsvcid": "4420" 00:18:43.547 }, 00:18:43.547 "peer_address": { 00:18:43.547 "trtype": "TCP", 00:18:43.547 "adrfam": "IPv4", 00:18:43.547 "traddr": "10.0.0.1", 00:18:43.547 "trsvcid": "58376" 00:18:43.547 }, 00:18:43.547 "auth": { 00:18:43.547 "state": "completed", 00:18:43.547 "digest": "sha384", 00:18:43.547 "dhgroup": "ffdhe8192" 00:18:43.547 } 00:18:43.547 } 00:18:43.547 ]' 00:18:43.547 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.547 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.547 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.547 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.547 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.806 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.806 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.806 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.082 19:10:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:18:45.468 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.468 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:45.468 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.468 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.468 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.468 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:45.468 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.468 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.468 19:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:45.468 19:10:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:45.468 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:45.468 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.468 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:45.468 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:45.468 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:45.468 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.468 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.468 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.468 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.468 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.468 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.468 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.034 00:18:46.034 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.034 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.034 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.292 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.292 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.292 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.292 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.549 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.549 19:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.549 { 00:18:46.549 "cntlid": 97, 00:18:46.549 "qid": 0, 00:18:46.549 "state": "enabled", 00:18:46.549 "thread": "nvmf_tgt_poll_group_000", 00:18:46.549 "listen_address": { 00:18:46.549 "trtype": "TCP", 00:18:46.549 "adrfam": "IPv4", 00:18:46.549 "traddr": "10.0.0.2", 00:18:46.549 "trsvcid": "4420" 00:18:46.549 }, 00:18:46.549 "peer_address": { 00:18:46.549 "trtype": "TCP", 00:18:46.549 "adrfam": "IPv4", 00:18:46.549 "traddr": "10.0.0.1", 00:18:46.549 "trsvcid": "42250" 00:18:46.549 }, 00:18:46.549 "auth": { 00:18:46.549 "state": "completed", 00:18:46.549 "digest": "sha512", 00:18:46.549 "dhgroup": "null" 00:18:46.549 } 00:18:46.549 } 00:18:46.549 ]' 00:18:46.549 19:10:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.549 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.549 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.550 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:46.550 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.807 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.807 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.807 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.065 19:10:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:18:48.438 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.438 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:48.438 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.438 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.438 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.439 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.439 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:48.439 19:10:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:48.696 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:48.696 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.696 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.696 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:48.696 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.696 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.696 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.696 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.696 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.696 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.696 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.696 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.261 00:18:49.261 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.261 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.261 19:10:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.826 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.826 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.826 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.826 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.826 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.826 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.826 { 00:18:49.826 "cntlid": 99, 00:18:49.826 "qid": 0, 00:18:49.826 "state": "enabled", 00:18:49.826 "thread": "nvmf_tgt_poll_group_000", 00:18:49.826 "listen_address": { 00:18:49.826 "trtype": "TCP", 00:18:49.826 "adrfam": "IPv4", 00:18:49.826 
"traddr": "10.0.0.2", 00:18:49.826 "trsvcid": "4420" 00:18:49.826 }, 00:18:49.826 "peer_address": { 00:18:49.826 "trtype": "TCP", 00:18:49.826 "adrfam": "IPv4", 00:18:49.826 "traddr": "10.0.0.1", 00:18:49.826 "trsvcid": "42278" 00:18:49.826 }, 00:18:49.826 "auth": { 00:18:49.826 "state": "completed", 00:18:49.826 "digest": "sha512", 00:18:49.826 "dhgroup": "null" 00:18:49.826 } 00:18:49.826 } 00:18:49.826 ]' 00:18:49.826 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.826 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.826 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.826 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:49.826 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.826 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.826 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.826 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.084 19:10:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:18:51.457 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.457 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:51.457 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.457 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.457 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.457 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.457 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:51.457 19:10:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:52.024 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:52.024 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.024 19:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:52.024 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:52.024 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:52.024 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.024 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.024 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.024 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.024 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.024 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.024 19:10:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.590 00:18:52.590 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.590 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.590 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.848 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.848 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.848 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.848 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.848 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.848 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.848 { 00:18:52.848 "cntlid": 101, 00:18:52.848 "qid": 0, 00:18:52.848 "state": "enabled", 00:18:52.848 "thread": "nvmf_tgt_poll_group_000", 00:18:52.848 "listen_address": { 00:18:52.848 "trtype": "TCP", 00:18:52.848 "adrfam": "IPv4", 00:18:52.848 "traddr": "10.0.0.2", 00:18:52.848 "trsvcid": "4420" 00:18:52.848 }, 00:18:52.848 "peer_address": { 00:18:52.848 "trtype": "TCP", 00:18:52.848 "adrfam": "IPv4", 00:18:52.848 "traddr": "10.0.0.1", 00:18:52.848 "trsvcid": "42296" 00:18:52.848 }, 00:18:52.848 "auth": { 00:18:52.848 "state": "completed", 00:18:52.848 "digest": "sha512", 00:18:52.848 "dhgroup": "null" 
00:18:52.848 } 00:18:52.848 } 00:18:52.848 ]' 00:18:52.848 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.848 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.848 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.848 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:52.848 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.849 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.849 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.849 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.415 19:10:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:18:54.349 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.608 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:54.608 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.608 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.608 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.608 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.608 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:54.608 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:55.175 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:55.175 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.175 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:55.175 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:55.175 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:55.175 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.175 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:55.175 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.175 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.175 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.175 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.175 19:11:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.741 00:18:55.741 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.741 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.741 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.308 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.308 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.308 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.308 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.308 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.308 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.308 { 00:18:56.308 "cntlid": 103, 00:18:56.308 "qid": 0, 00:18:56.308 "state": "enabled", 00:18:56.308 "thread": "nvmf_tgt_poll_group_000", 00:18:56.308 "listen_address": { 00:18:56.308 "trtype": "TCP", 00:18:56.308 "adrfam": "IPv4", 00:18:56.308 "traddr": "10.0.0.2", 00:18:56.308 "trsvcid": "4420" 00:18:56.308 }, 00:18:56.308 "peer_address": { 00:18:56.308 "trtype": "TCP", 00:18:56.308 "adrfam": "IPv4", 00:18:56.308 "traddr": "10.0.0.1", 00:18:56.308 "trsvcid": "43800" 00:18:56.308 }, 00:18:56.308 "auth": { 00:18:56.308 "state": "completed", 00:18:56.308 "digest": "sha512", 00:18:56.308 "dhgroup": "null" 00:18:56.308 } 00:18:56.308 } 00:18:56.308 ]' 00:18:56.308 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.308 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.308 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.308 19:11:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:56.308 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.308 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.308 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.308 19:11:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.873 19:11:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.248 19:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.248 19:11:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.836 00:18:58.836 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.836 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.836 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.102 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.102 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.102 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.102 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.102 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.102 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.102 { 00:18:59.102 "cntlid": 105, 00:18:59.102 "qid": 0, 00:18:59.102 "state": "enabled", 00:18:59.102 "thread": "nvmf_tgt_poll_group_000", 00:18:59.102 "listen_address": { 00:18:59.102 "trtype": "TCP", 00:18:59.102 "adrfam": "IPv4", 00:18:59.102 "traddr": "10.0.0.2", 00:18:59.102 "trsvcid": "4420" 00:18:59.102 }, 00:18:59.102 "peer_address": { 00:18:59.102 "trtype": "TCP", 00:18:59.102 "adrfam": "IPv4", 00:18:59.102 "traddr": "10.0.0.1", 00:18:59.102 "trsvcid": "43846" 00:18:59.102 }, 00:18:59.102 "auth": { 00:18:59.102 "state": "completed", 00:18:59.102 "digest": "sha512", 00:18:59.102 "dhgroup": "ffdhe2048" 00:18:59.102 } 00:18:59.102 } 00:18:59.102 ]' 00:18:59.102 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.102 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.102 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.102 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:59.102 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.102 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.102 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.102 19:11:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.668 19:11:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:19:01.041 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.041 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:01.041 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.041 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.041 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.041 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.041 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:01.041 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:01.298 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:01.298 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.298 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.298 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:01.298 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:01.298 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.298 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.298 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.298 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.298 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:19:01.298 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.298 19:11:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.555 00:19:01.555 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.555 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.555 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.813 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.813 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.813 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.813 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.070 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.070 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.070 { 00:19:02.070 "cntlid": 107, 00:19:02.070 "qid": 0, 00:19:02.070 "state": "enabled", 00:19:02.070 "thread": "nvmf_tgt_poll_group_000", 00:19:02.070 "listen_address": { 00:19:02.070 "trtype": "TCP", 00:19:02.070 "adrfam": "IPv4", 00:19:02.070 "traddr": "10.0.0.2", 00:19:02.070 "trsvcid": "4420" 00:19:02.070 }, 00:19:02.070 "peer_address": { 00:19:02.070 "trtype": "TCP", 00:19:02.070 "adrfam": "IPv4", 00:19:02.070 "traddr": "10.0.0.1", 00:19:02.070 "trsvcid": "43876" 00:19:02.070 }, 00:19:02.070 "auth": { 00:19:02.070 "state": "completed", 00:19:02.070 "digest": "sha512", 00:19:02.070 "dhgroup": "ffdhe2048" 00:19:02.070 } 00:19:02.070 } 00:19:02.070 ]' 00:19:02.070 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.070 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.070 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.070 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:02.070 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.070 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.070 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.070 19:11:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.634 19:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:19:04.005 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.005 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:04.005 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.005 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.005 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.005 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.005 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.005 19:11:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.571 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:04.571 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.571 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.571 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:04.571 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.571 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.571 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.571 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.571 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.571 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.571 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
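
The entries above make up one full pass of auth.sh's connect_authenticate cycle (here sha512 with the ffdhe2048 group and key2). A rough sketch of what each pass does, reconstructed only from the commands visible in this trace — the paths, NQNs, address, and jq filters are the ones the log itself uses; the target-side RPC socket (rpc.py's default) and the earlier registration of the named keys key$keyid/ckey$keyid are assumptions, and this is a simplified outline, not the actual target/auth.sh:

  # Sketch reconstructed from the trace above; not the real test script.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  trpc() { "$rpc_py" "$@"; }                          # target RPC (nvmf_*), default socket -- assumption
  hrpc() { "$rpc_py" -s /var/tmp/host.sock "$@"; }    # host-side bdev RPC, socket as in the trace

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Pin the host initiator to a single digest/dhgroup pair so the
      # negotiation result is deterministic for the check below.
      hrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # Allow the host on the subsystem with the key pair under test
      # (key$keyid/ckey$keyid name keys registered earlier in the script -- not shown here).
      trpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
          --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
      # Attaching forces a DH-HMAC-CHAP exchange on the new connection.
      hrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
      # The target's qpair listing is the ground truth for what was negotiated.
      local qpairs
      qpairs=$(trpc nvmf_subsystem_get_qpairs "$subnqn")
      [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
      [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
      [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
      # Tear down so the next key starts from a clean state.
      hrpc bdev_nvme_detach_controller nvme0
  }

Checking auth.state == "completed" on the target's own qpair dump is what confirms the handshake actually ran, rather than the connection silently succeeding without authentication. As the trace shows, each pass then repeats the same handshake with the kernel initiator (nvme connect ... --dhchap-secret/--dhchap-ctrl-secret followed by nvme disconnect) and removes the host again with nvmf_subsystem_remove_host, so every digest/dhgroup/key combination is exercised end to end by both initiators.
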
00:19:04.571 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.829 00:19:04.829 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.829 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.829 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.086 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.086 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.086 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.086 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.086 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.086 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.086 { 00:19:05.086 "cntlid": 109, 00:19:05.086 "qid": 0, 00:19:05.086 "state": "enabled", 00:19:05.086 "thread": "nvmf_tgt_poll_group_000", 00:19:05.086 "listen_address": { 00:19:05.086 "trtype": "TCP", 00:19:05.086 "adrfam": "IPv4", 00:19:05.086 "traddr": "10.0.0.2", 00:19:05.086 "trsvcid": "4420" 00:19:05.086 }, 00:19:05.086 "peer_address": { 00:19:05.086 "trtype": "TCP", 00:19:05.086 "adrfam": "IPv4", 00:19:05.086 "traddr": "10.0.0.1", 00:19:05.086 "trsvcid": "49880" 00:19:05.086 }, 00:19:05.086 "auth": { 00:19:05.086 "state": "completed", 00:19:05.087 "digest": "sha512", 00:19:05.087 "dhgroup": "ffdhe2048" 00:19:05.087 } 00:19:05.087 } 00:19:05.087 ]' 00:19:05.087 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.344 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.344 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.344 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:05.344 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.344 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.344 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.344 19:11:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.907 19:11:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:19:07.281 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.281 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:07.281 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.281 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.281 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.281 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.281 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.281 19:11:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.539 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:07.539 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.539 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.539 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:07.539 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:07.539 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.539 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:07.539 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.539 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.539 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.539 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.539 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.104 00:19:08.104 19:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.104 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.104 19:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.670 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.670 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.671 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.671 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.671 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.671 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.671 { 00:19:08.671 "cntlid": 111, 00:19:08.671 "qid": 0, 00:19:08.671 "state": "enabled", 00:19:08.671 "thread": "nvmf_tgt_poll_group_000", 00:19:08.671 "listen_address": { 00:19:08.671 "trtype": "TCP", 00:19:08.671 "adrfam": "IPv4", 00:19:08.671 "traddr": "10.0.0.2", 00:19:08.671 "trsvcid": "4420" 00:19:08.671 }, 00:19:08.671 "peer_address": { 00:19:08.671 "trtype": "TCP", 00:19:08.671 "adrfam": "IPv4", 00:19:08.671 "traddr": "10.0.0.1", 00:19:08.671 "trsvcid": "49916" 00:19:08.671 }, 00:19:08.671 "auth": { 00:19:08.671 "state": "completed", 00:19:08.671 "digest": "sha512", 00:19:08.671 "dhgroup": "ffdhe2048" 00:19:08.671 } 00:19:08.671 } 00:19:08.671 ]' 00:19:08.671 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.671 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.671 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.671 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:08.671 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.671 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.671 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.671 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.929 19:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:19:10.304 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.304 19:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:10.304 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.304 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.304 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.304 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.304 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.304 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:10.304 19:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:10.562 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:10.562 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.562 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.562 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:10.562 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:10.562 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.562 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.562 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.562 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.562 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.562 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.562 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.128 00:19:11.128 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.128 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.128 19:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.386 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.386 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.386 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.386 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.386 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.386 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.386 { 00:19:11.386 "cntlid": 113, 00:19:11.386 "qid": 0, 00:19:11.386 "state": "enabled", 00:19:11.386 "thread": "nvmf_tgt_poll_group_000", 00:19:11.386 "listen_address": { 00:19:11.386 "trtype": "TCP", 00:19:11.386 "adrfam": "IPv4", 00:19:11.386 "traddr": "10.0.0.2", 00:19:11.386 "trsvcid": "4420" 00:19:11.386 }, 00:19:11.386 "peer_address": { 00:19:11.386 "trtype": "TCP", 00:19:11.386 "adrfam": "IPv4", 00:19:11.386 "traddr": "10.0.0.1", 00:19:11.386 "trsvcid": "49942" 00:19:11.386 }, 00:19:11.386 "auth": { 00:19:11.386 "state": "completed", 00:19:11.386 "digest": "sha512", 00:19:11.386 "dhgroup": "ffdhe3072" 00:19:11.386 } 00:19:11.386 } 00:19:11.386 ]' 00:19:11.386 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.386 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.386 19:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.386 19:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.386 19:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.386 19:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.386 19:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.386 19:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.951 19:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:19:13.327 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.327 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:13.327 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.327 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.327 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.327 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.327 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.327 19:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:13.600 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:13.600 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.600 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:13.600 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:13.600 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:13.600 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.600 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.600 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.600 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.600 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.600 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.600 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.883 00:19:13.883 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.883 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.883 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.141 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:14.141 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.141 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.141 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.141 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.400 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.400 { 00:19:14.400 "cntlid": 115, 00:19:14.400 "qid": 0, 00:19:14.400 "state": "enabled", 00:19:14.400 "thread": "nvmf_tgt_poll_group_000", 00:19:14.400 "listen_address": { 00:19:14.400 "trtype": "TCP", 00:19:14.400 "adrfam": "IPv4", 00:19:14.400 "traddr": "10.0.0.2", 00:19:14.400 "trsvcid": "4420" 00:19:14.400 }, 00:19:14.400 "peer_address": { 00:19:14.400 "trtype": "TCP", 00:19:14.400 "adrfam": "IPv4", 00:19:14.400 "traddr": "10.0.0.1", 00:19:14.400 "trsvcid": "41414" 00:19:14.400 }, 00:19:14.400 "auth": { 00:19:14.400 "state": "completed", 00:19:14.400 "digest": "sha512", 00:19:14.400 "dhgroup": "ffdhe3072" 00:19:14.400 } 00:19:14.400 } 00:19:14.400 ]' 00:19:14.400 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.400 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.400 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.400 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:14.400 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.400 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.400 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.400 19:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.659 19:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:19:16.034 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.034 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:16.034 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.034 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.034 19:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.034 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.034 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:16.034 19:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:16.601 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:16.601 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.601 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.601 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:16.601 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:16.601 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.601 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.601 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.601 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.601 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.601 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.601 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.859 00:19:16.859 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.859 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.859 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.426 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.426 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.426 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.426 19:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.426 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.426 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.426 { 00:19:17.426 "cntlid": 117, 00:19:17.426 "qid": 0, 00:19:17.426 "state": "enabled", 00:19:17.426 "thread": "nvmf_tgt_poll_group_000", 00:19:17.426 "listen_address": { 00:19:17.426 "trtype": "TCP", 00:19:17.426 "adrfam": "IPv4", 00:19:17.426 "traddr": "10.0.0.2", 00:19:17.426 "trsvcid": "4420" 00:19:17.426 }, 00:19:17.426 "peer_address": { 00:19:17.426 "trtype": "TCP", 00:19:17.426 "adrfam": "IPv4", 00:19:17.426 "traddr": "10.0.0.1", 00:19:17.426 "trsvcid": "41430" 00:19:17.426 }, 00:19:17.426 "auth": { 00:19:17.426 "state": "completed", 00:19:17.426 "digest": "sha512", 00:19:17.426 "dhgroup": "ffdhe3072" 00:19:17.426 } 00:19:17.426 } 00:19:17.426 ]' 00:19:17.426 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.426 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.426 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.426 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.426 19:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.426 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.426 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.426 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.992 19:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:19:19.366 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.366 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:19.366 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.366 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.366 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.366 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.366 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:19:19.366 19:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:19.625 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:19.625 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.625 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:19.625 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:19.625 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:19.625 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.625 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:19.625 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.625 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.625 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.625 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.625 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.883 00:19:19.883 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.883 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.883 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.449 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.449 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.449 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.449 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.449 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.449 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.449 { 00:19:20.449 "cntlid": 119, 00:19:20.449 "qid": 0, 00:19:20.449 "state": "enabled", 00:19:20.449 "thread": 
"nvmf_tgt_poll_group_000", 00:19:20.449 "listen_address": { 00:19:20.449 "trtype": "TCP", 00:19:20.449 "adrfam": "IPv4", 00:19:20.449 "traddr": "10.0.0.2", 00:19:20.449 "trsvcid": "4420" 00:19:20.449 }, 00:19:20.449 "peer_address": { 00:19:20.449 "trtype": "TCP", 00:19:20.449 "adrfam": "IPv4", 00:19:20.449 "traddr": "10.0.0.1", 00:19:20.449 "trsvcid": "41444" 00:19:20.449 }, 00:19:20.449 "auth": { 00:19:20.449 "state": "completed", 00:19:20.449 "digest": "sha512", 00:19:20.449 "dhgroup": "ffdhe3072" 00:19:20.449 } 00:19:20.449 } 00:19:20.449 ]' 00:19:20.449 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.449 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.449 19:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.449 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.449 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.449 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.449 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.449 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.014 19:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:19:21.948 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.948 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:21.948 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.948 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.206 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.206 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.206 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.206 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:22.206 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:22.464 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:22.464 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.464 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.464 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:22.464 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:22.464 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.464 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.464 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.464 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.464 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.464 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.464 19:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.722 00:19:22.722 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.722 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.722 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.980 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.981 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.981 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.981 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.981 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.981 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.981 { 00:19:22.981 "cntlid": 121, 00:19:22.981 "qid": 0, 00:19:22.981 "state": "enabled", 00:19:22.981 "thread": "nvmf_tgt_poll_group_000", 00:19:22.981 "listen_address": { 00:19:22.981 "trtype": "TCP", 00:19:22.981 "adrfam": "IPv4", 00:19:22.981 "traddr": "10.0.0.2", 00:19:22.981 "trsvcid": "4420" 00:19:22.981 }, 00:19:22.981 "peer_address": { 00:19:22.981 "trtype": "TCP", 00:19:22.981 "adrfam": 
"IPv4", 00:19:22.981 "traddr": "10.0.0.1", 00:19:22.981 "trsvcid": "41476" 00:19:22.981 }, 00:19:22.981 "auth": { 00:19:22.981 "state": "completed", 00:19:22.981 "digest": "sha512", 00:19:22.981 "dhgroup": "ffdhe4096" 00:19:22.981 } 00:19:22.981 } 00:19:22.981 ]' 00:19:22.981 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.238 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.238 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.238 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:23.238 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.239 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.239 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.239 19:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.804 19:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.181 
19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.181 19:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.116 00:19:26.116 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.116 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.116 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.374 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.374 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.374 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.374 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.374 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.374 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.374 { 00:19:26.374 "cntlid": 123, 00:19:26.374 "qid": 0, 00:19:26.374 "state": "enabled", 00:19:26.374 "thread": "nvmf_tgt_poll_group_000", 00:19:26.374 "listen_address": { 00:19:26.374 "trtype": "TCP", 00:19:26.374 "adrfam": "IPv4", 00:19:26.374 "traddr": "10.0.0.2", 00:19:26.374 "trsvcid": "4420" 00:19:26.374 }, 00:19:26.374 "peer_address": { 00:19:26.374 "trtype": "TCP", 00:19:26.374 "adrfam": "IPv4", 00:19:26.374 "traddr": "10.0.0.1", 00:19:26.374 "trsvcid": "57894" 00:19:26.374 }, 00:19:26.374 "auth": { 00:19:26.374 "state": "completed", 00:19:26.374 "digest": "sha512", 00:19:26.374 "dhgroup": "ffdhe4096" 00:19:26.374 } 00:19:26.374 } 00:19:26.374 ]' 00:19:26.374 19:11:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.374 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.374 19:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.374 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.375 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.632 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.632 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.632 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.890 19:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:19:28.269 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.269 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:28.269 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.269 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.269 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.269 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.269 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:28.269 19:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:28.539 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:28.539 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.539 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.539 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:28.539 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.539 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:28.539 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.539 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.539 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.539 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.539 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.539 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.472 00:19:29.473 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.473 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.473 19:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.731 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.731 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.731 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.731 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.731 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.731 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.731 { 00:19:29.731 "cntlid": 125, 00:19:29.731 "qid": 0, 00:19:29.731 "state": "enabled", 00:19:29.731 "thread": "nvmf_tgt_poll_group_000", 00:19:29.731 "listen_address": { 00:19:29.731 "trtype": "TCP", 00:19:29.731 "adrfam": "IPv4", 00:19:29.731 "traddr": "10.0.0.2", 00:19:29.731 "trsvcid": "4420" 00:19:29.731 }, 00:19:29.731 "peer_address": { 00:19:29.731 "trtype": "TCP", 00:19:29.731 "adrfam": "IPv4", 00:19:29.731 "traddr": "10.0.0.1", 00:19:29.731 "trsvcid": "57928" 00:19:29.731 }, 00:19:29.731 "auth": { 00:19:29.731 "state": "completed", 00:19:29.731 "digest": "sha512", 00:19:29.731 "dhgroup": "ffdhe4096" 00:19:29.731 } 00:19:29.731 } 00:19:29.731 ]' 00:19:29.731 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.731 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.731 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.989 
19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.989 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.989 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.989 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.989 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.247 19:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:19:31.622 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.623 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:31.623 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.623 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.623 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.623 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.623 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:31.623 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:31.881 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:31.881 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.881 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.881 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:31.881 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:31.881 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.881 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:31.881 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:31.881 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.881 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.881 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.881 19:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:32.447 00:19:32.448 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.448 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.448 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.014 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.014 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.014 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.014 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.014 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.014 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.014 { 00:19:33.014 "cntlid": 127, 00:19:33.014 "qid": 0, 00:19:33.014 "state": "enabled", 00:19:33.014 "thread": "nvmf_tgt_poll_group_000", 00:19:33.014 "listen_address": { 00:19:33.014 "trtype": "TCP", 00:19:33.014 "adrfam": "IPv4", 00:19:33.014 "traddr": "10.0.0.2", 00:19:33.014 "trsvcid": "4420" 00:19:33.014 }, 00:19:33.014 "peer_address": { 00:19:33.014 "trtype": "TCP", 00:19:33.014 "adrfam": "IPv4", 00:19:33.014 "traddr": "10.0.0.1", 00:19:33.014 "trsvcid": "57936" 00:19:33.014 }, 00:19:33.014 "auth": { 00:19:33.014 "state": "completed", 00:19:33.014 "digest": "sha512", 00:19:33.014 "dhgroup": "ffdhe4096" 00:19:33.014 } 00:19:33.014 } 00:19:33.014 ]' 00:19:33.014 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.014 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.014 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.014 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.014 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.273 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.273 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.273 19:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.531 19:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.904 19:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.835 00:19:35.835 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.835 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.835 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.835 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.835 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.835 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.835 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.835 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.835 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.835 { 00:19:35.835 "cntlid": 129, 00:19:35.835 "qid": 0, 00:19:35.835 "state": "enabled", 00:19:35.835 "thread": "nvmf_tgt_poll_group_000", 00:19:35.835 "listen_address": { 00:19:35.835 "trtype": "TCP", 00:19:35.835 "adrfam": "IPv4", 00:19:35.835 "traddr": "10.0.0.2", 00:19:35.835 "trsvcid": "4420" 00:19:35.835 }, 00:19:35.835 "peer_address": { 00:19:35.835 "trtype": "TCP", 00:19:35.835 "adrfam": "IPv4", 00:19:35.835 "traddr": "10.0.0.1", 00:19:35.835 "trsvcid": "45248" 00:19:35.835 }, 00:19:35.835 "auth": { 00:19:35.835 "state": "completed", 00:19:35.835 "digest": "sha512", 00:19:35.835 "dhgroup": "ffdhe6144" 00:19:35.835 } 00:19:35.835 } 00:19:35.835 ]' 00:19:35.835 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.093 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.093 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.093 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.093 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.093 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.093 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.093 19:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.657 
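
The @92-@96 frames in the trace show the driver for all of this: target/auth.sh iterates every dhgroup for each digest, and every key index inside that. The real script lives in the SPDK tree; a hedged sketch of the loop's shape, with array contents inferred from this run and echo standing in for connect_authenticate:

  keys=(key0 key1 key2 key3)
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)    # inferred from the rounds in this excerpt
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          echo "round: sha512 / $dhgroup / key$keyid"   # stands in for connect_authenticate
      done
  done
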
19:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:19:38.029 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.029 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:38.029 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.029 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.029 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.029 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.029 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:38.029 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:38.286 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:38.286 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.286 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:38.286 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:38.286 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:38.286 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.286 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.287 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.287 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.287 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.287 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.287 19:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.226 00:19:39.226 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.226 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.226 19:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.486 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.486 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.486 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.486 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.486 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.486 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.486 { 00:19:39.486 "cntlid": 131, 00:19:39.486 "qid": 0, 00:19:39.486 "state": "enabled", 00:19:39.486 "thread": "nvmf_tgt_poll_group_000", 00:19:39.486 "listen_address": { 00:19:39.486 "trtype": "TCP", 00:19:39.486 "adrfam": "IPv4", 00:19:39.486 "traddr": "10.0.0.2", 00:19:39.486 "trsvcid": "4420" 00:19:39.486 }, 00:19:39.486 "peer_address": { 00:19:39.486 "trtype": "TCP", 00:19:39.486 "adrfam": "IPv4", 00:19:39.486 "traddr": "10.0.0.1", 00:19:39.486 "trsvcid": "45270" 00:19:39.486 }, 00:19:39.486 "auth": { 00:19:39.486 "state": "completed", 00:19:39.486 "digest": "sha512", 00:19:39.486 "dhgroup": "ffdhe6144" 00:19:39.486 } 00:19:39.486 } 00:19:39.486 ]' 00:19:39.486 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.743 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.743 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.743 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.743 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.743 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.743 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.743 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.000 19:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:19:41.374 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.374 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:41.374 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.374 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.374 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.374 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.374 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:41.374 19:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:41.633 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:41.633 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.633 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.633 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:41.633 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:41.633 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.633 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.633 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.633 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.633 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.633 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.633 19:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.567 
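
Between attaching and querying qpairs, every round also asserts that exactly one host-side controller came up, then tears it down so the next round starts clean. A minimal sketch using the same host RPC socket as this run:

  name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
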
00:19:42.567 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.567 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.567 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.825 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.825 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.825 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.825 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.825 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.825 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.825 { 00:19:42.825 "cntlid": 133, 00:19:42.825 "qid": 0, 00:19:42.825 "state": "enabled", 00:19:42.825 "thread": "nvmf_tgt_poll_group_000", 00:19:42.825 "listen_address": { 00:19:42.825 "trtype": "TCP", 00:19:42.825 "adrfam": "IPv4", 00:19:42.825 "traddr": "10.0.0.2", 00:19:42.825 "trsvcid": "4420" 00:19:42.825 }, 00:19:42.825 "peer_address": { 00:19:42.825 "trtype": "TCP", 00:19:42.825 "adrfam": "IPv4", 00:19:42.825 "traddr": "10.0.0.1", 00:19:42.825 "trsvcid": "45310" 00:19:42.825 }, 00:19:42.825 "auth": { 00:19:42.825 "state": "completed", 00:19:42.825 "digest": "sha512", 00:19:42.825 "dhgroup": "ffdhe6144" 00:19:42.825 } 00:19:42.825 } 00:19:42.825 ]' 00:19:42.825 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.825 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.825 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.825 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:42.825 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.825 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.825 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.825 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.407 19:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:19:44.352 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.352 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:19:44.352 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:44.352 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.352 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.352 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.352 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.352 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:44.352 19:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:44.917 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:44.917 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.917 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:44.917 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:44.917 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:44.917 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.917 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:44.917 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.917 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.917 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.917 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.917 19:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:45.483 00:19:45.483 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.483 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.483 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:46.050 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.050 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.050 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.050 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.050 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.050 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.050 { 00:19:46.050 "cntlid": 135, 00:19:46.050 "qid": 0, 00:19:46.050 "state": "enabled", 00:19:46.050 "thread": "nvmf_tgt_poll_group_000", 00:19:46.050 "listen_address": { 00:19:46.050 "trtype": "TCP", 00:19:46.050 "adrfam": "IPv4", 00:19:46.050 "traddr": "10.0.0.2", 00:19:46.050 "trsvcid": "4420" 00:19:46.050 }, 00:19:46.050 "peer_address": { 00:19:46.050 "trtype": "TCP", 00:19:46.050 "adrfam": "IPv4", 00:19:46.050 "traddr": "10.0.0.1", 00:19:46.050 "trsvcid": "45042" 00:19:46.050 }, 00:19:46.050 "auth": { 00:19:46.050 "state": "completed", 00:19:46.050 "digest": "sha512", 00:19:46.050 "dhgroup": "ffdhe6144" 00:19:46.050 } 00:19:46.050 } 00:19:46.050 ]' 00:19:46.050 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.050 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.050 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.050 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:46.050 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.308 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.308 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.308 19:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.567 19:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:19:47.942 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.942 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:47.942 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.942 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:47.942 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.942 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.942 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.942 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:47.942 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:48.200 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:48.200 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.200 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:48.200 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:48.200 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.200 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.200 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.200 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.200 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.200 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.200 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.200 19:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.577 00:19:49.577 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.577 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.577 19:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.577 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.577 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
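The sequence repeating through this stretch of the trace is one pass of the connect_authenticate helper in target/auth.sh: the host-side bdev options are pinned to a single digest/dhgroup, the target registers the host NQN with a key pair, the host attaches a controller, and the resulting qpair's auth block is checked for the negotiated parameters and state "completed". Condensed to its RPC skeleton (paths shortened for readability; rpc_cmd is the suite's target-side RPC wrapper, and key0/ckey0 name keys the script generated earlier in the run), one iteration is roughly:

  # host side: pin the initiator to one digest/dhgroup combination for this pass
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # target side: allow the host NQN to authenticate with this key pair
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach; success implies the DH-HMAC-CHAP exchange completed in both directions
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # target side: the qpair should report the negotiated digest/dhgroup and state "completed"
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'

Each pass then exercises the kernel initiator as well (nvme connect with the corresponding --dhchap-secret and, where a controller key is configured, --dhchap-ctrl-secret, followed by nvme disconnect) and tears down the host entry and controller before moving to the next digest/dhgroup/key combination.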
00:19:49.577 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.577 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.577 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.577 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.577 { 00:19:49.577 "cntlid": 137, 00:19:49.577 "qid": 0, 00:19:49.577 "state": "enabled", 00:19:49.577 "thread": "nvmf_tgt_poll_group_000", 00:19:49.577 "listen_address": { 00:19:49.577 "trtype": "TCP", 00:19:49.577 "adrfam": "IPv4", 00:19:49.577 "traddr": "10.0.0.2", 00:19:49.577 "trsvcid": "4420" 00:19:49.577 }, 00:19:49.577 "peer_address": { 00:19:49.577 "trtype": "TCP", 00:19:49.577 "adrfam": "IPv4", 00:19:49.577 "traddr": "10.0.0.1", 00:19:49.577 "trsvcid": "45076" 00:19:49.577 }, 00:19:49.577 "auth": { 00:19:49.577 "state": "completed", 00:19:49.577 "digest": "sha512", 00:19:49.577 "dhgroup": "ffdhe8192" 00:19:49.577 } 00:19:49.577 } 00:19:49.577 ]' 00:19:49.577 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.835 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.835 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.835 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.835 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.835 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.835 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.835 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.402 19:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:19:51.776 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.776 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:51.776 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.776 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.776 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.776 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.776 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:51.776 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:52.034 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:52.034 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.034 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:52.034 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:52.034 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:52.034 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.034 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.034 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.034 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.034 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.034 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.034 19:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.411 00:19:53.411 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.411 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.411 19:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.669 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.669 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.669 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.669 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.669 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.669 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.669 { 00:19:53.669 "cntlid": 139, 00:19:53.669 "qid": 0, 00:19:53.669 "state": "enabled", 00:19:53.669 "thread": "nvmf_tgt_poll_group_000", 00:19:53.669 "listen_address": { 00:19:53.669 "trtype": "TCP", 00:19:53.669 "adrfam": "IPv4", 00:19:53.669 "traddr": "10.0.0.2", 00:19:53.669 "trsvcid": "4420" 00:19:53.669 }, 00:19:53.669 "peer_address": { 00:19:53.669 "trtype": "TCP", 00:19:53.669 "adrfam": "IPv4", 00:19:53.669 "traddr": "10.0.0.1", 00:19:53.669 "trsvcid": "45100" 00:19:53.669 }, 00:19:53.669 "auth": { 00:19:53.669 "state": "completed", 00:19:53.669 "digest": "sha512", 00:19:53.669 "dhgroup": "ffdhe8192" 00:19:53.669 } 00:19:53.669 } 00:19:53.669 ]' 00:19:53.669 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.927 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.927 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.927 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:53.927 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.927 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.927 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.927 19:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.493 19:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTdlYjU5N2ZiMjY4NzY5Nzg1Yjc1ZjcyZmM0Y2U3YjYLMF1N: --dhchap-ctrl-secret DHHC-1:02:OTg0MTUwMTdhZTFkY2ZhNzZjYzVhMjVhODY4YTZiZDI1NTY2ZGMzOTM3YmE4NmQ5ui6H8A==: 00:19:55.866 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.866 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:55.866 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.866 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.866 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.866 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.866 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:55.866 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:56.124 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:56.124 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.124 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.124 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:56.124 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:56.124 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.124 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.124 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.124 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.124 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.124 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.124 19:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.497 00:19:57.497 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.497 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.497 19:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.497 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.497 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.497 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.497 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.497 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.497 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.497 { 00:19:57.497 "cntlid": 141, 00:19:57.497 "qid": 0, 00:19:57.497 "state": "enabled", 00:19:57.497 "thread": "nvmf_tgt_poll_group_000", 00:19:57.497 "listen_address": 
{ 00:19:57.497 "trtype": "TCP", 00:19:57.497 "adrfam": "IPv4", 00:19:57.497 "traddr": "10.0.0.2", 00:19:57.497 "trsvcid": "4420" 00:19:57.497 }, 00:19:57.497 "peer_address": { 00:19:57.497 "trtype": "TCP", 00:19:57.497 "adrfam": "IPv4", 00:19:57.497 "traddr": "10.0.0.1", 00:19:57.497 "trsvcid": "59086" 00:19:57.497 }, 00:19:57.497 "auth": { 00:19:57.497 "state": "completed", 00:19:57.497 "digest": "sha512", 00:19:57.497 "dhgroup": "ffdhe8192" 00:19:57.497 } 00:19:57.497 } 00:19:57.497 ]' 00:19:57.755 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.755 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:57.755 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.755 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:57.755 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.755 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.755 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.755 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.322 19:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NDBjYTgzNDM4NjE0NDAzZjRkYzk5OTUwYzQ3MDZmZGMxZjdiMWQxN2UzZGZkMTJibBsBUA==: --dhchap-ctrl-secret DHHC-1:01:NzJiMGQwYWNhMjBmMTFiZWQ2ZDdmODgzYTRjYzdmYjWR4/lw: 00:19:59.291 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.550 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:59.550 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.550 19:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.550 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.550 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.550 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:59.550 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:59.808 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:59.808 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.808 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:59.808 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:59.808 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:59.808 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.808 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:59.808 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.808 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.808 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.808 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.808 19:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.182 00:20:01.182 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.182 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.182 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.182 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.182 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.182 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.182 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.440 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.440 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.440 { 00:20:01.440 "cntlid": 143, 00:20:01.440 "qid": 0, 00:20:01.440 "state": "enabled", 00:20:01.440 "thread": "nvmf_tgt_poll_group_000", 00:20:01.440 "listen_address": { 00:20:01.440 "trtype": "TCP", 00:20:01.440 "adrfam": "IPv4", 00:20:01.440 "traddr": "10.0.0.2", 00:20:01.440 "trsvcid": "4420" 00:20:01.440 }, 00:20:01.440 "peer_address": { 00:20:01.440 "trtype": "TCP", 00:20:01.440 "adrfam": "IPv4", 00:20:01.440 "traddr": "10.0.0.1", 00:20:01.440 "trsvcid": "59116" 00:20:01.440 }, 00:20:01.440 "auth": { 00:20:01.440 "state": "completed", 00:20:01.440 "digest": "sha512", 00:20:01.440 "dhgroup": 
"ffdhe8192" 00:20:01.440 } 00:20:01.440 } 00:20:01.440 ]' 00:20:01.440 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.440 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.440 19:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.440 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:01.440 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.440 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.440 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.440 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.007 19:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:20:03.382 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.382 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:03.382 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.382 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.382 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.382 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:03.382 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:03.382 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:03.382 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:03.382 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:03.382 19:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:03.382 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:03.382 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.382 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:03.382 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:03.382 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:03.382 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.382 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.382 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.382 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.382 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.382 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.382 19:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.756 00:20:04.756 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.756 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.756 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.322 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.322 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.322 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.322 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.322 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.322 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.322 { 00:20:05.322 "cntlid": 145, 00:20:05.322 "qid": 0, 00:20:05.322 "state": "enabled", 00:20:05.322 "thread": "nvmf_tgt_poll_group_000", 00:20:05.322 "listen_address": { 00:20:05.322 "trtype": "TCP", 00:20:05.322 "adrfam": "IPv4", 00:20:05.322 "traddr": "10.0.0.2", 00:20:05.322 "trsvcid": "4420" 00:20:05.322 }, 00:20:05.322 "peer_address": { 00:20:05.322 "trtype": "TCP", 00:20:05.322 "adrfam": "IPv4", 00:20:05.322 "traddr": "10.0.0.1", 00:20:05.322 "trsvcid": "41076" 00:20:05.322 }, 00:20:05.322 "auth": { 00:20:05.322 
"state": "completed", 00:20:05.322 "digest": "sha512", 00:20:05.322 "dhgroup": "ffdhe8192" 00:20:05.322 } 00:20:05.322 } 00:20:05.322 ]' 00:20:05.322 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.322 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.322 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.322 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:05.322 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.322 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.322 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.322 19:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.580 19:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MGZjYzhhY2FlZWNjZGI2NjRmYjYxZDk5MzFmNWVkYTNlMmE3NmNhMmIzMGIyNzU2WScrvw==: --dhchap-ctrl-secret DHHC-1:03:NWIxZjUxYjhlZmIwOTAxNDBmZDMzNzAwOTcxZTMwZjQwMzM3MzVjYWVjOGJjZTY3ZTk4NDdlNmE3MGU5ZGM0Off6k6M=: 00:20:06.953 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.953 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:06.953 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.953 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.953 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.953 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:20:06.953 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.953 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.954 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.954 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:06.954 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:06.954 19:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:06.954 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:06.954 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.954 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:06.954 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.954 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:06.954 19:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:08.329 request: 00:20:08.329 { 00:20:08.329 "name": "nvme0", 00:20:08.329 "trtype": "tcp", 00:20:08.329 "traddr": "10.0.0.2", 00:20:08.329 "adrfam": "ipv4", 00:20:08.329 "trsvcid": "4420", 00:20:08.329 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:08.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:08.329 "prchk_reftag": false, 00:20:08.329 "prchk_guard": false, 00:20:08.329 "hdgst": false, 00:20:08.329 "ddgst": false, 00:20:08.329 "dhchap_key": "key2", 00:20:08.329 "method": "bdev_nvme_attach_controller", 00:20:08.329 "req_id": 1 00:20:08.329 } 00:20:08.329 Got JSON-RPC error response 00:20:08.329 response: 00:20:08.329 { 00:20:08.329 "code": -5, 00:20:08.329 "message": "Input/output error" 00:20:08.329 } 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.329 
19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:08.329 19:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:09.264 request: 00:20:09.264 { 00:20:09.264 "name": "nvme0", 00:20:09.264 "trtype": "tcp", 00:20:09.264 "traddr": "10.0.0.2", 00:20:09.264 "adrfam": "ipv4", 00:20:09.264 "trsvcid": "4420", 00:20:09.264 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:09.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:09.264 "prchk_reftag": false, 00:20:09.264 "prchk_guard": false, 00:20:09.264 "hdgst": false, 00:20:09.264 "ddgst": false, 00:20:09.264 "dhchap_key": "key1", 00:20:09.264 "dhchap_ctrlr_key": "ckey2", 00:20:09.264 "method": "bdev_nvme_attach_controller", 00:20:09.264 "req_id": 1 00:20:09.264 } 00:20:09.264 Got JSON-RPC error response 00:20:09.264 response: 00:20:09.264 { 00:20:09.264 "code": -5, 00:20:09.264 "message": "Input/output error" 00:20:09.264 } 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:09.264 19:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.264 19:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.199 request: 00:20:10.199 { 00:20:10.199 "name": "nvme0", 00:20:10.199 "trtype": "tcp", 00:20:10.199 "traddr": "10.0.0.2", 00:20:10.199 "adrfam": "ipv4", 00:20:10.199 "trsvcid": "4420", 00:20:10.199 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:10.199 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:10.199 "prchk_reftag": false, 00:20:10.199 "prchk_guard": false, 00:20:10.199 "hdgst": false, 00:20:10.199 "ddgst": false, 00:20:10.199 "dhchap_key": "key1", 00:20:10.199 "dhchap_ctrlr_key": "ckey1", 00:20:10.199 "method": "bdev_nvme_attach_controller", 00:20:10.199 "req_id": 1 00:20:10.199 } 00:20:10.199 Got JSON-RPC error response 00:20:10.199 response: 00:20:10.199 { 00:20:10.199 "code": -5, 00:20:10.199 "message": "Input/output error" 00:20:10.199 } 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1644627 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1644627 ']' 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1644627 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1644627 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1644627' 00:20:10.199 killing process with pid 1644627 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1644627 00:20:10.199 19:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1644627 00:20:10.457 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:10.457 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:10.457 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:10.457 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.457 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=1675407 00:20:10.457 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1675407 00:20:10.457 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:10.457 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1675407 ']' 00:20:10.457 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.457 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:10.457 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.458 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:10.458 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:11.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:11.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:11.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:11.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1675407 00:20:11.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1675407 ']' 00:20:11.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.023 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:11.024 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
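Here the script kills the original nvmf_tgt (pid 1644627) and brings up a replacement with the nvmf_auth log flag enabled, so the remaining positive and negative authentication cases run against a target with auth-layer debug logging. Reduced to the shell skeleton visible in the trace (workspace paths shortened; waitforlisten is the autotest_common.sh helper that polls until the new process answers on the default /var/tmp/spdk.sock socket; capturing the pid with $! is an assumption about the nvmfappstart wrapper's internals):

  # start a fresh target inside the test's network namespace, with auth debug logging on
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # hold off on any rpc_cmd until the new process is listening on its UNIX domain socket
  waitforlisten "$nvmfpid"

Because the target was started with --wait-for-rpc, the rpc_cmd batch that follows (target/auth.sh@143 in the trace) presumably completes framework initialization before the subsystem, keys, and listener are re-registered and the auth cases resume.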
00:20:11.024 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:11.024 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.282 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.283 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.283 19:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.654 00:20:12.654 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.654 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.654 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.912 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.912 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.912 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.912 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.912 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.912 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.912 { 00:20:12.912 "cntlid": 1, 00:20:12.912 "qid": 0, 00:20:12.912 "state": "enabled", 00:20:12.912 "thread": "nvmf_tgt_poll_group_000", 00:20:12.912 "listen_address": { 00:20:12.912 "trtype": "TCP", 00:20:12.912 "adrfam": "IPv4", 00:20:12.912 "traddr": "10.0.0.2", 00:20:12.912 "trsvcid": "4420" 00:20:12.912 }, 00:20:12.912 "peer_address": { 00:20:12.912 "trtype": "TCP", 00:20:12.912 "adrfam": "IPv4", 00:20:12.912 "traddr": "10.0.0.1", 00:20:12.912 "trsvcid": "41140" 00:20:12.912 }, 00:20:12.912 "auth": { 00:20:12.912 "state": "completed", 00:20:12.912 "digest": "sha512", 00:20:12.912 "dhgroup": "ffdhe8192" 00:20:12.912 } 00:20:12.912 } 00:20:12.912 ]' 00:20:12.912 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.912 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.912 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.912 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:12.912 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.912 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.912 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.912 19:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.480 19:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:MmM4YWE1MjhlZTRlMDc0ZGQ3ZDZhNjkwNDExMDk1YjA2ZTUyZDE1OTA2NWMyYmYxMDBhNGE4NGEyOTk2ZmI2MtPgSXw=: 00:20:14.884 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.884 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:14.884 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.884 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.884 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.884 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:14.884 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.885 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.885 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.885 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:14.885 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:15.143 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.143 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:15.143 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.143 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:15.143 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.143 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:15.143 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.143 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.143 19:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.401 request: 00:20:15.401 { 00:20:15.401 "name": "nvme0", 00:20:15.401 "trtype": "tcp", 00:20:15.401 "traddr": "10.0.0.2", 00:20:15.401 "adrfam": "ipv4", 00:20:15.401 "trsvcid": "4420", 00:20:15.401 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:15.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:15.401 "prchk_reftag": false, 00:20:15.401 "prchk_guard": false, 00:20:15.401 "hdgst": false, 00:20:15.401 "ddgst": false, 00:20:15.401 "dhchap_key": "key3", 00:20:15.401 "method": "bdev_nvme_attach_controller", 00:20:15.401 "req_id": 1 00:20:15.401 } 00:20:15.401 Got JSON-RPC error response 00:20:15.401 response: 00:20:15.401 { 00:20:15.401 "code": -5, 00:20:15.401 "message": "Input/output error" 00:20:15.401 } 00:20:15.401 19:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:15.401 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:15.401 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:15.401 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:15.401 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:15.401 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:15.401 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:15.401 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:15.967 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.967 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:15.967 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.967 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:15.967 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.967 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:15.967 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:15.967 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.967 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.225 request: 00:20:16.225 { 00:20:16.225 "name": "nvme0", 00:20:16.225 "trtype": "tcp", 00:20:16.225 "traddr": "10.0.0.2", 00:20:16.225 "adrfam": "ipv4", 00:20:16.225 "trsvcid": "4420", 00:20:16.225 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:16.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:16.225 "prchk_reftag": false, 00:20:16.225 "prchk_guard": false, 00:20:16.225 "hdgst": false, 00:20:16.225 "ddgst": false, 00:20:16.225 "dhchap_key": "key3", 00:20:16.225 
"method": "bdev_nvme_attach_controller", 00:20:16.225 "req_id": 1 00:20:16.225 } 00:20:16.225 Got JSON-RPC error response 00:20:16.225 response: 00:20:16.225 { 00:20:16.225 "code": -5, 00:20:16.225 "message": "Input/output error" 00:20:16.225 } 00:20:16.225 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:16.225 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:16.225 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:16.225 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:16.225 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:16.225 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:16.225 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:16.225 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:16.225 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:16.225 19:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:16.791 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:16.791 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.791 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.791 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.791 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:16.791 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.791 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.791 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.791 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:16.791 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:16.791 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:16.792 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:16.792 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.792 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:16.792 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:16.792 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:16.792 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:17.050 request: 00:20:17.050 { 00:20:17.050 "name": "nvme0", 00:20:17.050 "trtype": "tcp", 00:20:17.050 "traddr": "10.0.0.2", 00:20:17.050 "adrfam": "ipv4", 00:20:17.050 "trsvcid": "4420", 00:20:17.050 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:17.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:17.050 "prchk_reftag": false, 00:20:17.050 "prchk_guard": false, 00:20:17.050 "hdgst": false, 00:20:17.050 "ddgst": false, 00:20:17.050 "dhchap_key": "key0", 00:20:17.050 "dhchap_ctrlr_key": "key1", 00:20:17.050 "method": "bdev_nvme_attach_controller", 00:20:17.050 "req_id": 1 00:20:17.050 } 00:20:17.050 Got JSON-RPC error response 00:20:17.050 response: 00:20:17.050 { 00:20:17.050 "code": -5, 00:20:17.050 "message": "Input/output error" 00:20:17.050 } 00:20:17.050 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:17.050 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:17.050 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:17.050 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:17.050 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:17.050 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:17.308 00:20:17.308 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:17.308 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
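The rejected attach attempts above all go through the NOT helper, which inverts the wrapped command's exit status so that an expected failure (here, the -5 Input/output error from bdev_nvme_attach_controller) counts as a pass. A stripped-down sketch of that pattern, assuming a simplified version of the autotest_common.sh NOT()/valid_exec_arg pair (the real helpers also classify the argument via `type -t` and special-case exit codes above 128, per the `(( es > 128 ))` check in the trace):

NOT() {
    local es=0
    "$@" || es=$?
    # Succeed only when the wrapped command failed.
    (( es != 0 ))
}

# Expected-failure usage: the inner command must exit non-zero.
NOT false && echo 'negative test passed'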
00:20:17.308 19:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.566 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.566 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.567 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.134 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:18.134 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:18.134 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1644787 00:20:18.134 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1644787 ']' 00:20:18.134 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1644787 00:20:18.134 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:18.134 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:18.134 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1644787 00:20:18.134 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:18.134 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:18.134 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1644787' 00:20:18.134 killing process with pid 1644787 00:20:18.134 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1644787 00:20:18.134 19:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1644787 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:18.701 rmmod nvme_tcp 00:20:18.701 rmmod nvme_fabrics 00:20:18.701 rmmod nvme_keyring 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 1675407 ']' 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1675407 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1675407 ']' 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1675407 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1675407 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1675407' 00:20:18.701 killing process with pid 1675407 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1675407 00:20:18.701 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1675407 00:20:19.270 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:19.270 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:19.270 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:19.270 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:19.270 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:19.270 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.270 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.270 19:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.176 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:21.176 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.M0I /tmp/spdk.key-sha256.W1Y /tmp/spdk.key-sha384.Y1w /tmp/spdk.key-sha512.JwN /tmp/spdk.key-sha512.XzI /tmp/spdk.key-sha384.FgD /tmp/spdk.key-sha256.nWM '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:21.176 00:20:21.176 real 4m25.691s 00:20:21.176 user 10m30.513s 00:20:21.176 sys 0m34.206s 00:20:21.176 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:21.176 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.176 ************************************ 00:20:21.176 END TEST nvmf_auth_target 00:20:21.176 ************************************ 00:20:21.176 19:12:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:21.176 19:12:26 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:21.176 19:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:21.176 19:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:21.176 19:12:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:21.176 ************************************ 00:20:21.176 START TEST nvmf_bdevio_no_huge 00:20:21.176 ************************************ 00:20:21.176 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:21.176 * Looking for test storage... 00:20:21.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.435 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:21.436 19:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:21.436 19:12:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:24.725 19:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:24.725 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.725 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.725 19:12:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:24.726 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:24.726 Found net devices under 0000:84:00.0: cvl_0_0 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:24.726 Found net devices under 0000:84:00.1: cvl_0_1 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:24.726 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:20:24.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:20:24.726 00:20:24.726 --- 10.0.0.2 ping statistics --- 00:20:24.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.726 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:24.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:24.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:20:24.726 00:20:24.726 --- 10.0.0.1 ping statistics --- 00:20:24.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.726 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1678369 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1678369 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1678369 ']' 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
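Both pings succeed because common.sh has moved one port of the dual-port NIC into a private network namespace and addressed the pair back-to-back. The equivalent manual steps, reduced to a sketch (interface names cvl_0_0/cvl_0_1 are the ones enumerated in this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target side
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # netns -> initiator side

Every nvmf_tgt invocation in this job is then prefixed with `ip netns exec cvl_0_0_ns_spdk`, so the target listens on 10.0.0.2 while the host-side tools connect from 10.0.0.1.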
00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:24.726 19:12:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:24.726 [2024-07-24 19:12:30.005454] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:20:24.726 [2024-07-24 19:12:30.005601] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:24.726 [2024-07-24 19:12:30.168393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.985 [2024-07-24 19:12:30.427028] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.985 [2024-07-24 19:12:30.427132] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.985 [2024-07-24 19:12:30.427167] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.985 [2024-07-24 19:12:30.427197] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.985 [2024-07-24 19:12:30.427222] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.985 [2024-07-24 19:12:30.427583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:24.985 [2024-07-24 19:12:30.428132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:24.985 [2024-07-24 19:12:30.428240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.985 [2024-07-24 19:12:30.428231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:25.551 [2024-07-24 19:12:31.140181] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.551 19:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:25.551 Malloc0 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:25.551 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:25.552 [2024-07-24 19:12:31.180744] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:25.552 { 00:20:25.552 "params": { 00:20:25.552 "name": "Nvme$subsystem", 00:20:25.552 "trtype": "$TEST_TRANSPORT", 00:20:25.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:25.552 "adrfam": "ipv4", 00:20:25.552 "trsvcid": "$NVMF_PORT", 00:20:25.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:25.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:25.552 "hdgst": ${hdgst:-false}, 00:20:25.552 "ddgst": ${ddgst:-false} 00:20:25.552 }, 00:20:25.552 "method": "bdev_nvme_attach_controller" 00:20:25.552 } 00:20:25.552 EOF 00:20:25.552 )") 00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
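gen_nvmf_target_json above expands one heredoc stanza per subsystem argument, substitutes the caller's environment, and pipes the joined config entries through jq; the fully resolved config handed to bdevio on /dev/fd/62 is printed immediately below. A reduced, runnable rendering of that substitution step (the defaults match this run's values; the real helper loops over its arguments and joins stanzas with IFS=,):

TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}
# One attach-controller stanza with the same fields the trace shows.
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config" | jq .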
00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:25.552 19:12:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:25.552 "params": { 00:20:25.552 "name": "Nvme1", 00:20:25.552 "trtype": "tcp", 00:20:25.552 "traddr": "10.0.0.2", 00:20:25.552 "adrfam": "ipv4", 00:20:25.552 "trsvcid": "4420", 00:20:25.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.552 "hdgst": false, 00:20:25.552 "ddgst": false 00:20:25.552 }, 00:20:25.552 "method": "bdev_nvme_attach_controller" 00:20:25.552 }' 00:20:25.552 [2024-07-24 19:12:31.231837] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:20:25.552 [2024-07-24 19:12:31.231942] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1678620 ] 00:20:25.810 [2024-07-24 19:12:31.318573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:25.810 [2024-07-24 19:12:31.463988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.810 [2024-07-24 19:12:31.464048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.810 [2024-07-24 19:12:31.464053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.069 I/O targets: 00:20:26.069 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:26.069 00:20:26.069 00:20:26.069 CUnit - A unit testing framework for C - Version 2.1-3 00:20:26.069 http://cunit.sourceforge.net/ 00:20:26.069 00:20:26.069 00:20:26.069 Suite: bdevio tests on: Nvme1n1 00:20:26.069 Test: blockdev write read block ...passed 00:20:26.069 Test: blockdev write zeroes read block ...passed 00:20:26.069 Test: blockdev write zeroes read no split ...passed 00:20:26.327 Test: blockdev write zeroes read split ...passed 00:20:26.327 Test: blockdev write zeroes read split partial ...passed 00:20:26.327 Test: blockdev reset ...[2024-07-24 19:12:31.825401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:26.327 [2024-07-24 19:12:31.825530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9a670 (9): Bad file descriptor 00:20:26.327 [2024-07-24 19:12:31.885066] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:26.327 passed 00:20:26.327 Test: blockdev write read 8 blocks ...passed 00:20:26.327 Test: blockdev write read size > 128k ...passed 00:20:26.327 Test: blockdev write read invalid size ...passed 00:20:26.327 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:26.327 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:26.327 Test: blockdev write read max offset ...passed 00:20:26.327 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:26.593 Test: blockdev writev readv 8 blocks ...passed 00:20:26.593 Test: blockdev writev readv 30 x 1block ...passed 00:20:26.593 Test: blockdev writev readv block ...passed 00:20:26.593 Test: blockdev writev readv size > 128k ...passed 00:20:26.593 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:26.593 Test: blockdev comparev and writev ...[2024-07-24 19:12:32.146186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:26.593 [2024-07-24 19:12:32.146231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.593 [2024-07-24 19:12:32.146264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:26.593 [2024-07-24 19:12:32.146286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.593 [2024-07-24 19:12:32.146876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:26.593 [2024-07-24 19:12:32.146909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:26.593 [2024-07-24 19:12:32.146937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:26.593 [2024-07-24 19:12:32.146959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:26.593 [2024-07-24 19:12:32.147541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:26.593 [2024-07-24 19:12:32.147573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:26.593 [2024-07-24 19:12:32.147601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:26.594 [2024-07-24 19:12:32.147623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:26.594 [2024-07-24 19:12:32.148191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:26.594 [2024-07-24 19:12:32.148222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:26.594 [2024-07-24 19:12:32.148251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:26.594 [2024-07-24 19:12:32.148279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:20:26.594 passed
00:20:26.594 Test: blockdev nvme passthru rw ...passed
00:20:26.594 Test: blockdev nvme passthru vendor specific ...[2024-07-24 19:12:32.230807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:26.594 [2024-07-24 19:12:32.230845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:20:26.594 [2024-07-24 19:12:32.231113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:26.594 [2024-07-24 19:12:32.231143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:20:26.594 [2024-07-24 19:12:32.231396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:26.594 [2024-07-24 19:12:32.231426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:20:26.594 [2024-07-24 19:12:32.231657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:26.594 [2024-07-24 19:12:32.231697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:20:26.594 passed
00:20:26.594 Test: blockdev nvme admin passthru ...passed
00:20:26.594 Test: blockdev copy ...passed
00:20:26.594
00:20:26.594 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:20:26.594               suites      1      1    n/a      0        0
00:20:26.594                tests     23     23     23      0        0
00:20:26.594              asserts    152    152    152      0      n/a
00:20:26.594
00:20:26.594 Elapsed time =    1.280 seconds
00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync
00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e
00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:27.160 rmmod nvme_tcp
00:20:27.160 rmmod nvme_fabrics
00:20:27.160 rmmod nvme_keyring
00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge --
nvmf/common.sh@124 -- # set -e 00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1678369 ']' 00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1678369 00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1678369 ']' 00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1678369 00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1678369 00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1678369' 00:20:27.160 killing process with pid 1678369 00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1678369 00:20:27.160 19:12:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1678369 00:20:28.099 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:28.099 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:28.099 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:28.099 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:28.099 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:28.099 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.099 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.099 19:12:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:30.026 00:20:30.026 real 0m8.763s 00:20:30.026 user 0m14.841s 00:20:30.026 sys 0m3.751s 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:30.026 ************************************ 00:20:30.026 END TEST nvmf_bdevio_no_huge 00:20:30.026 ************************************ 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:30.026 ************************************ 00:20:30.026 START TEST nvmf_tls 00:20:30.026 ************************************ 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:30.026 * Looking for test storage... 00:20:30.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.026 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
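The nvmftestinit call traced next walks the Intel E810/X722 and Mellanox PCI device-ID tables, keeps the functions matching SPDK_TEST_NVMF_NICS=e810, and resolves each one to its kernel net device through sysfs. A condensed, hypothetical rendering of that discovery loop (the real helper builds pci_devs from a PCI bus cache; the two addresses here are simply the ones this host reports below):

#!/usr/bin/env bash
# Sketch only: map PCI functions to their net devices the way the trace does.
pci_devs=(0000:84:00.0 0000:84:00.1)   # assumption: the two E810 ports found below
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the device name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done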
00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:30.291 19:12:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:32.825 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:32.826 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:32.826 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:32.826 Found net devices under 0000:84:00.0: cvl_0_0 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:32.826 Found net devices under 0000:84:00.1: cvl_0_1 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:32.826 19:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:32.826 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:33.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:20:33.085 00:20:33.085 --- 10.0.0.2 ping statistics --- 00:20:33.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.085 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:33.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:33.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:20:33.085 00:20:33.085 --- 10.0.0.1 ping statistics --- 00:20:33.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.085 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1680848 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1680848 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1680848 ']' 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:33.085 19:12:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.085 [2024-07-24 19:12:38.627965] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:20:33.085 [2024-07-24 19:12:38.628063] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.085 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.085 [2024-07-24 19:12:38.725233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.344 [2024-07-24 19:12:38.864374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.344 [2024-07-24 19:12:38.864449] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.344 [2024-07-24 19:12:38.864471] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.344 [2024-07-24 19:12:38.864492] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.344 [2024-07-24 19:12:38.864506] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.344 [2024-07-24 19:12:38.864549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.278 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:34.278 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:34.278 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:34.278 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:34.278 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.278 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.278 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:34.278 19:12:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:34.536 true 00:20:34.536 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:34.536 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:35.103 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:35.103 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:35.103 19:12:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:35.669 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:35.669 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:35.927 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:35.927 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:35.927 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:20:36.493 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:36.493 19:12:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:36.751 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:36.751 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:36.751 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:36.751 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:37.317 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:37.317 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:37.317 19:12:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:37.575 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:37.575 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:38.146 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:38.146 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:38.146 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:38.403 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:38.403 19:12:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:38.660 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:38.916 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:38.916 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:38.916 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.ysHQ3YSEE0 00:20:38.916 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:38.916 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.wAydRVDiOo 00:20:38.916 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:38.916 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:38.916 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.ysHQ3YSEE0 00:20:38.916 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.wAydRVDiOo 00:20:38.916 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:39.172 19:12:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:39.429 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.ysHQ3YSEE0 00:20:39.429 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ysHQ3YSEE0 00:20:39.429 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:39.686 [2024-07-24 19:12:45.365918] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.942 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:40.198 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:40.455 [2024-07-24 19:12:45.963535] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:40.455 [2024-07-24 19:12:45.963818] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.455 19:12:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:40.712 malloc0 00:20:40.712 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:40.970 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ysHQ3YSEE0 00:20:41.536 [2024-07-24 19:12:46.927854] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:41.536 19:12:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ysHQ3YSEE0 00:20:41.536 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.531 Initializing NVMe Controllers 00:20:51.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:51.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:51.531 Initialization complete. Launching workers. 00:20:51.531 ======================================================== 00:20:51.531 Latency(us) 00:20:51.531 Device Information : IOPS MiB/s Average min max 00:20:51.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5890.70 23.01 10869.27 1410.64 11829.50 00:20:51.531 ======================================================== 00:20:51.531 Total : 5890.70 23.01 10869.27 1410.64 11829.50 00:20:51.531 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ysHQ3YSEE0 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ysHQ3YSEE0' 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1683001 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1683001 /var/tmp/bdevperf.sock 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1683001 ']' 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:51.531 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.531 [2024-07-24 19:12:57.154284] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:20:51.531 [2024-07-24 19:12:57.154391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1683001 ] 00:20:51.531 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.792 [2024-07-24 19:12:57.238787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.792 [2024-07-24 19:12:57.379631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.050 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:52.050 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:52.050 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ysHQ3YSEE0 00:20:52.308 [2024-07-24 19:12:57.905233] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.308 [2024-07-24 19:12:57.905371] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:52.308 TLSTESTn1 00:20:52.308 19:12:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:52.566 Running I/O for 10 seconds... 
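Both interchange PSKs exercised by this test (/tmp/tmp.ysHQ3YSEE0, which the target was configured with, and the deliberately mismatched /tmp/tmp.wAydRVDiOo) came from the format_interchange_psk calls earlier in the trace. A sketch of that encoding, assuming the NVMe TLS interchange layout of base64(key bytes || little-endian CRC32) behind the NVMeTLSkey-1 prefix with hash identifier 01; running it reproduces the first key printed above:

key=00112233445566778899aabbccddeeff
python - <<EOF
import base64, zlib
key = b"$key"                                # raw interchange secret
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity trailer (assumed LE)
print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())
EOF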
00:21:02.534
00:21:02.534 Latency(us)
00:21:02.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:02.534 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:02.535 Verification LBA range: start 0x0 length 0x2000
00:21:02.535 TLSTESTn1 : 10.03 2616.04 10.22 0.00 0.00 48814.90 8301.23 41166.32
00:21:02.535 ===================================================================================================================
00:21:02.535 Total : 2616.04 10.22 0.00 0.00 48814.90 8301.23 41166.32
00:21:02.535 0
00:21:02.535 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:02.535 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1683001
00:21:02.535 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1683001 ']'
00:21:02.535 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1683001
00:21:02.535 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname
00:21:02.535 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:02.535 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1683001
00:21:02.793 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:21:02.793 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:21:02.793 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1683001'
00:21:02.793 killing process with pid 1683001
00:21:02.793 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1683001
00:21:02.793 Received shutdown signal, test time was about 10.000000 seconds
00:21:02.793
00:21:02.793 Latency(us)
00:21:02.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:02.793 ===================================================================================================================
00:21:02.793 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:02.793 [2024-07-24 19:13:08.264154] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:21:02.793 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1683001
00:21:03.051 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wAydRVDiOo
00:21:03.051 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:21:03.051 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wAydRVDiOo
00:21:03.051 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:21:03.051 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:21:03.051 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:21:03.051 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
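The NOT wrapper being traced here inverts the exit status of run_bdevperf: the attach attempt with the unregistered key below has to fail for this test case to count as passed. A simplified stand-in for the helper (the real one in autotest_common.sh also checks, via the type -t calls above, that the wrapped name is actually callable):

NOT() {
    # Succeed only when the wrapped command fails -- the shape of a negative test.
    if "$@"; then
        return 1
    fi
    return 0
}
NOT false && echo 'negative test passed: wrapped command failed as expected'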
00:21:03.051 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wAydRVDiOo 00:21:03.051 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:03.051 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:03.051 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:03.051 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wAydRVDiOo' 00:21:03.051 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:03.051 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1684319 00:21:03.051 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:03.052 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:03.052 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1684319 /var/tmp/bdevperf.sock 00:21:03.052 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1684319 ']' 00:21:03.052 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.052 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:03.052 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.052 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:03.052 19:13:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.052 [2024-07-24 19:13:08.694564] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:21:03.052 [2024-07-24 19:13:08.694744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684319 ] 00:21:03.310 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.310 [2024-07-24 19:13:08.793220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.310 [2024-07-24 19:13:08.931865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.568 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:03.568 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:03.568 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wAydRVDiOo 00:21:03.827 [2024-07-24 19:13:09.387009] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.827 [2024-07-24 19:13:09.387157] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:03.827 [2024-07-24 19:13:09.394991] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:03.827 [2024-07-24 19:13:09.395939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3d6d0 (107): Transport endpoint is not connected 00:21:03.827 [2024-07-24 19:13:09.396925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3d6d0 (9): Bad file descriptor 00:21:03.827 [2024-07-24 19:13:09.397922] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.827 [2024-07-24 19:13:09.397948] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:03.827 [2024-07-24 19:13:09.397972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:03.827 request: 00:21:03.827 { 00:21:03.827 "name": "TLSTEST", 00:21:03.827 "trtype": "tcp", 00:21:03.827 "traddr": "10.0.0.2", 00:21:03.827 "adrfam": "ipv4", 00:21:03.827 "trsvcid": "4420", 00:21:03.827 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.827 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.827 "prchk_reftag": false, 00:21:03.827 "prchk_guard": false, 00:21:03.827 "hdgst": false, 00:21:03.827 "ddgst": false, 00:21:03.827 "psk": "/tmp/tmp.wAydRVDiOo", 00:21:03.827 "method": "bdev_nvme_attach_controller", 00:21:03.827 "req_id": 1 00:21:03.827 } 00:21:03.827 Got JSON-RPC error response 00:21:03.827 response: 00:21:03.827 { 00:21:03.827 "code": -5, 00:21:03.827 "message": "Input/output error" 00:21:03.827 } 00:21:03.827 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1684319 00:21:03.827 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1684319 ']' 00:21:03.827 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1684319 00:21:03.827 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:03.827 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:03.827 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1684319 00:21:03.827 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:03.827 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:03.827 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1684319' 00:21:03.827 killing process with pid 1684319 00:21:03.827 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1684319 00:21:03.827 Received shutdown signal, test time was about 10.000000 seconds 00:21:03.827 00:21:03.827 Latency(us) 00:21:03.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.827 =================================================================================================================== 00:21:03.827 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:03.827 [2024-07-24 19:13:09.468337] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:03.827 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1684319 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ysHQ3YSEE0 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ysHQ3YSEE0 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ysHQ3YSEE0 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ysHQ3YSEE0' 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1684460 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1684460 /var/tmp/bdevperf.sock 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1684460 ']' 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:04.394 19:13:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.394 [2024-07-24 19:13:09.835949] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:21:04.394 [2024-07-24 19:13:09.836032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684460 ] 00:21:04.394 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.394 [2024-07-24 19:13:09.905506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.394 [2024-07-24 19:13:10.048254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.328 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:05.328 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:05.328 19:13:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.ysHQ3YSEE0 00:21:05.587 [2024-07-24 19:13:11.258675] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.587 [2024-07-24 19:13:11.258826] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:05.587 [2024-07-24 19:13:11.269754] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:05.587 [2024-07-24 19:13:11.269797] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:05.587 [2024-07-24 19:13:11.269849] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:05.587 [2024-07-24 19:13:11.270566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25056d0 (107): Transport endpoint is not connected 00:21:05.587 [2024-07-24 19:13:11.271556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25056d0 (9): Bad file descriptor 00:21:05.587 [2024-07-24 19:13:11.272553] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:05.587 [2024-07-24 19:13:11.272579] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:05.587 [2024-07-24 19:13:11.272605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
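The failure mode in this second case differs from the first: the connection reaches the target, but the target's PSK lookup misses because host2 was never registered against cnode1. As the tcp_sock_get_key record above shows, the lookup key is the TLS PSK identity string, which from that log line appears to be assembled as:

identity="NVMe0R01 ${hostnqn} ${subnqn}"   # e.g. "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"

(The exact meaning of the NVMe0R01 prefix is defined by the NVMe/TCP transport spec; the field layout here is inferred from the log line itself. What matters for the test is that identity equality requires the registered hostnqn.)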
00:21:05.587 request: 00:21:05.587 { 00:21:05.587 "name": "TLSTEST", 00:21:05.587 "trtype": "tcp", 00:21:05.587 "traddr": "10.0.0.2", 00:21:05.587 "adrfam": "ipv4", 00:21:05.587 "trsvcid": "4420", 00:21:05.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.587 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:05.587 "prchk_reftag": false, 00:21:05.587 "prchk_guard": false, 00:21:05.587 "hdgst": false, 00:21:05.587 "ddgst": false, 00:21:05.587 "psk": "/tmp/tmp.ysHQ3YSEE0", 00:21:05.587 "method": "bdev_nvme_attach_controller", 00:21:05.587 "req_id": 1 00:21:05.587 } 00:21:05.587 Got JSON-RPC error response 00:21:05.587 response: 00:21:05.587 { 00:21:05.587 "code": -5, 00:21:05.587 "message": "Input/output error" 00:21:05.587 } 00:21:05.846 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1684460 00:21:05.846 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1684460 ']' 00:21:05.846 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1684460 00:21:05.846 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:05.846 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:05.846 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1684460 00:21:05.846 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:05.846 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:05.846 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1684460' 00:21:05.846 killing process with pid 1684460 00:21:05.846 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1684460 00:21:05.846 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.846 00:21:05.846 Latency(us) 00:21:05.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.846 =================================================================================================================== 00:21:05.846 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:05.846 [2024-07-24 19:13:11.320818] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:05.846 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1684460 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ysHQ3YSEE0 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ysHQ3YSEE0 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ysHQ3YSEE0 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ysHQ3YSEE0' 00:21:06.105 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:06.106 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1684614 00:21:06.106 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:06.106 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:06.106 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1684614 /var/tmp/bdevperf.sock 00:21:06.106 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1684614 ']' 00:21:06.106 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.106 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:06.106 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:06.106 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:06.106 19:13:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.106 [2024-07-24 19:13:11.692482] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:21:06.106 [2024-07-24 19:13:11.692577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684614 ] 00:21:06.106 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.106 [2024-07-24 19:13:11.796532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.364 [2024-07-24 19:13:11.939395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.622 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:06.622 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:06.622 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ysHQ3YSEE0 00:21:06.881 [2024-07-24 19:13:12.379652] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.881 [2024-07-24 19:13:12.379815] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:06.881 [2024-07-24 19:13:12.392896] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:06.881 [2024-07-24 19:13:12.392939] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:06.881 [2024-07-24 19:13:12.392995] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:06.881 [2024-07-24 19:13:12.393905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba6d0 (107): Transport endpoint is not connected 00:21:06.881 [2024-07-24 19:13:12.394891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ba6d0 (9): Bad file descriptor 00:21:06.881 [2024-07-24 19:13:12.395894] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:06.881 [2024-07-24 19:13:12.395921] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:06.881 [2024-07-24 19:13:12.395944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
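Each of these negative cases is wrapped in the NOT helper from autotest_common.sh, whose trace fragments (local es=0, (( es > 128 )), (( !es == 0 )), return 1) keep recurring above. A simplified sketch of its behavior, omitting the remapping the real helper performs for signal exit codes above 128:

NOT() {
    local es=0
    "$@" || es=$?
    # invert: the wrapped command is expected to fail, so a zero exit
    # status here would itself be a test failure
    (( es != 0 ))
}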
00:21:06.881 request: 00:21:06.881 { 00:21:06.881 "name": "TLSTEST", 00:21:06.881 "trtype": "tcp", 00:21:06.881 "traddr": "10.0.0.2", 00:21:06.881 "adrfam": "ipv4", 00:21:06.881 "trsvcid": "4420", 00:21:06.881 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:06.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:06.881 "prchk_reftag": false, 00:21:06.881 "prchk_guard": false, 00:21:06.881 "hdgst": false, 00:21:06.881 "ddgst": false, 00:21:06.881 "psk": "/tmp/tmp.ysHQ3YSEE0", 00:21:06.881 "method": "bdev_nvme_attach_controller", 00:21:06.881 "req_id": 1 00:21:06.881 } 00:21:06.881 Got JSON-RPC error response 00:21:06.881 response: 00:21:06.881 { 00:21:06.881 "code": -5, 00:21:06.881 "message": "Input/output error" 00:21:06.881 } 00:21:06.881 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1684614 00:21:06.881 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1684614 ']' 00:21:06.881 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1684614 00:21:06.881 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:06.881 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:06.881 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1684614 00:21:06.881 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:06.881 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:06.881 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1684614' 00:21:06.881 killing process with pid 1684614 00:21:06.881 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1684614 00:21:06.881 Received shutdown signal, test time was about 10.000000 seconds 00:21:06.881 00:21:06.881 Latency(us) 00:21:06.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.881 =================================================================================================================== 00:21:06.881 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:06.881 [2024-07-24 19:13:12.476571] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:06.881 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1684614 00:21:07.139 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:07.139 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:07.139 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:07.139 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:07.139 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:07.139 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:07.139 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:07.139 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1684756 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1684756 /var/tmp/bdevperf.sock 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1684756 ']' 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:07.140 19:13:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.398 [2024-07-24 19:13:12.853212] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:21:07.398 [2024-07-24 19:13:12.853303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684756 ] 00:21:07.398 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.398 [2024-07-24 19:13:12.928579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.398 [2024-07-24 19:13:13.067420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.656 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:07.656 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:07.656 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:07.928 [2024-07-24 19:13:13.545963] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:07.928 [2024-07-24 19:13:13.548095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2455e10 (9): Bad file descriptor 00:21:07.928 [2024-07-24 19:13:13.549088] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:07.928 [2024-07-24 19:13:13.549116] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:07.928 [2024-07-24 19:13:13.549141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:07.928 request: 00:21:07.928 { 00:21:07.928 "name": "TLSTEST", 00:21:07.928 "trtype": "tcp", 00:21:07.928 "traddr": "10.0.0.2", 00:21:07.928 "adrfam": "ipv4", 00:21:07.928 "trsvcid": "4420", 00:21:07.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.928 "prchk_reftag": false, 00:21:07.928 "prchk_guard": false, 00:21:07.928 "hdgst": false, 00:21:07.928 "ddgst": false, 00:21:07.928 "method": "bdev_nvme_attach_controller", 00:21:07.928 "req_id": 1 00:21:07.928 } 00:21:07.928 Got JSON-RPC error response 00:21:07.928 response: 00:21:07.928 { 00:21:07.928 "code": -5, 00:21:07.928 "message": "Input/output error" 00:21:07.928 } 00:21:07.928 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1684756 00:21:07.928 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1684756 ']' 00:21:07.928 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1684756 00:21:07.928 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:07.928 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:07.928 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1684756 00:21:07.928 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:07.928 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:07.928 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1684756' 00:21:07.928 killing process with pid 1684756 00:21:07.928 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1684756 00:21:07.928 Received shutdown signal, test time was about 10.000000 seconds 00:21:07.928 00:21:07.928 Latency(us) 00:21:07.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.928 =================================================================================================================== 00:21:07.928 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:07.928 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1684756 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1680848 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1680848 ']' 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1680848 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1680848 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1680848' 00:21:08.494 killing process with pid 1680848 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1680848 00:21:08.494 [2024-07-24 19:13:13.989722] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:08.494 19:13:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1680848 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.x35R8oNgyX 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.x35R8oNgyX 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1685010 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1685010 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1685010 ']' 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.754 19:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:08.754 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.013 [2024-07-24 19:13:14.498133] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:21:09.013 [2024-07-24 19:13:14.498301] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.013 EAL: No free 2048 kB hugepages reported on node 1 00:21:09.013 [2024-07-24 19:13:14.592891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.272 [2024-07-24 19:13:14.735831] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.272 [2024-07-24 19:13:14.735895] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.272 [2024-07-24 19:13:14.735915] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.272 [2024-07-24 19:13:14.735930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.272 [2024-07-24 19:13:14.735944] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
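A few records back, format_interchange_psk turned the 48-character key text into the NVMeTLSkey-1:02:...: value written to /tmp/tmp.x35R8oNgyX. The python stub it pipes through is not shown in the trace; below is a standalone sketch of the transformation, assuming the appended CRC32 is little-endian and that digest argument 2 selects the :02: (SHA-384) hash label.

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'PYEOF'
import base64, sys, zlib
k = sys.argv[1].encode()                   # the key text is used as raw ASCII bytes
crc = zlib.crc32(k).to_bytes(4, 'little')  # 4-byte CRC32 appended before encoding (byte order assumed)
print('NVMeTLSkey-1:02:' + base64.b64encode(k + crc).decode() + ':')
PYEOF

Because the 48-byte payload is a multiple of 3, its base64 needs no padding, which is why the key_long value above is visibly the base64 of the ASCII key followed by the 8-character CRC block wWXNJw==.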
00:21:09.272 [2024-07-24 19:13:14.735979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.272 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:09.272 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:09.272 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:09.272 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:09.272 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.272 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.272 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.x35R8oNgyX 00:21:09.272 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.x35R8oNgyX 00:21:09.272 19:13:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:09.530 [2024-07-24 19:13:15.226315] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.788 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:10.045 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:10.304 [2024-07-24 19:13:15.836022] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:10.304 [2024-07-24 19:13:15.836345] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.304 19:13:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:10.564 malloc0 00:21:10.564 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:11.168 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x35R8oNgyX 00:21:11.427 [2024-07-24 19:13:16.897470] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:11.427 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.x35R8oNgyX 00:21:11.427 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:11.427 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:11.427 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:11.427 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.x35R8oNgyX' 00:21:11.427 19:13:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:11.427 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1685313 00:21:11.427 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:11.427 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:11.427 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1685313 /var/tmp/bdevperf.sock 00:21:11.427 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1685313 ']' 00:21:11.427 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.427 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:11.427 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.427 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:11.427 19:13:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.427 [2024-07-24 19:13:16.969552] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:21:11.427 [2024-07-24 19:13:16.969637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1685313 ] 00:21:11.427 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.427 [2024-07-24 19:13:17.046640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.686 [2024-07-24 19:13:17.191394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.943 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.943 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:11.943 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x35R8oNgyX 00:21:12.202 [2024-07-24 19:13:17.713291] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:12.202 [2024-07-24 19:13:17.713424] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:12.202 TLSTESTn1 00:21:12.202 19:13:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:12.460 Running I/O for 10 seconds... 
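This is the one positive case in the sequence, so it is worth seeing its shape without the xtrace noise. The following condenses the commands traced above into the path that produces the TLSTESTn1 results below, with the jenkins workspace prefix and network-namespace wrapping dropped; the -t 20 passed to bdevperf.py is assumed to be its RPC wait timeout, distinct from bdevperf's own 10-second -t run time.

# target side: subsystem cnode1 with a malloc namespace, a TLS listener (-k),
# and host1 registered with the 0600 key file
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.x35R8oNgyX

# initiator side: attach over TLS with the same key, then kick off the verify workload
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x35R8oNgyX
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests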
00:21:22.427 00:21:22.427 Latency(us) 00:21:22.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.427 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:22.427 Verification LBA range: start 0x0 length 0x2000 00:21:22.427 TLSTESTn1 : 10.02 2607.04 10.18 0.00 0.00 48990.77 11311.03 64856.37 00:21:22.427 =================================================================================================================== 00:21:22.427 Total : 2607.04 10.18 0.00 0.00 48990.77 11311.03 64856.37 00:21:22.427 0 00:21:22.427 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:22.427 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1685313 00:21:22.427 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1685313 ']' 00:21:22.427 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1685313 00:21:22.427 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:22.427 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:22.427 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1685313 00:21:22.427 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:22.427 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:22.427 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1685313' 00:21:22.427 killing process with pid 1685313 00:21:22.427 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1685313 00:21:22.427 Received shutdown signal, test time was about 10.000000 seconds 00:21:22.427 00:21:22.427 Latency(us) 00:21:22.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.427 =================================================================================================================== 00:21:22.427 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.427 [2024-07-24 19:13:28.118377] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:22.427 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1685313 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.x35R8oNgyX 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.x35R8oNgyX 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.x35R8oNgyX 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:22.994 
19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.x35R8oNgyX 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.x35R8oNgyX' 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1686628 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1686628 /var/tmp/bdevperf.sock 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1686628 ']' 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.994 19:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.994 [2024-07-24 19:13:28.537318] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:21:22.994 [2024-07-24 19:13:28.537500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1686628 ] 00:21:22.994 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.994 [2024-07-24 19:13:28.649884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.253 [2024-07-24 19:13:28.794860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.187 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.187 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:24.187 19:13:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x35R8oNgyX 00:21:24.753 [2024-07-24 19:13:30.168797] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.753 [2024-07-24 19:13:30.168887] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:24.753 [2024-07-24 19:13:30.168908] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.x35R8oNgyX 00:21:24.753 request: 00:21:24.753 { 00:21:24.753 "name": "TLSTEST", 00:21:24.753 "trtype": "tcp", 00:21:24.753 "traddr": "10.0.0.2", 00:21:24.753 "adrfam": "ipv4", 00:21:24.753 "trsvcid": "4420", 00:21:24.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:24.753 "prchk_reftag": false, 00:21:24.753 "prchk_guard": false, 00:21:24.753 "hdgst": false, 00:21:24.753 "ddgst": false, 00:21:24.753 "psk": "/tmp/tmp.x35R8oNgyX", 00:21:24.753 "method": "bdev_nvme_attach_controller", 00:21:24.753 "req_id": 1 00:21:24.753 } 00:21:24.753 Got JSON-RPC error response 00:21:24.753 response: 00:21:24.753 { 00:21:24.753 "code": -1, 00:21:24.753 "message": "Operation not permitted" 00:21:24.753 } 00:21:24.753 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1686628 00:21:24.753 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1686628 ']' 00:21:24.753 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1686628 00:21:24.753 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:24.753 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:24.753 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1686628 00:21:24.753 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:24.753 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:24.753 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1686628' 00:21:24.753 killing process with pid 1686628 00:21:24.753 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1686628 00:21:24.753 Received shutdown signal, test time was about 10.000000 seconds 00:21:24.753 
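The only variable between this failing attach and the passing run above is the key file mode, which the preceding chmod 0666 loosened. Both sides enforce the same invariant, and the nvmf_subsystem_add_host case further down exercises it on the target; schematically (the key path is the test's mktemp name, error strings quoted from the records above and below):

chmod 0600 /tmp/tmp.x35R8oNgyX   # accepted by bdev_nvme_attach_controller and nvmf_subsystem_add_host
chmod 0666 /tmp/tmp.x35R8oNgyX   # initiator: bdev_nvme_load_psk -> "Incorrect permissions for PSK file"
                                 # target: tcp_load_psk -> "Could not retrieve PSK from file"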
00:21:24.753 Latency(us) 00:21:24.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.753 =================================================================================================================== 00:21:24.753 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:24.753 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1686628 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1685010 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1685010 ']' 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1685010 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1685010 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1685010' 00:21:25.012 killing process with pid 1685010 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1685010 00:21:25.012 [2024-07-24 19:13:30.562305] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:25.012 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1685010 00:21:25.271 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:25.271 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:25.271 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:25.271 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.271 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1686909 00:21:25.271 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:25.271 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1686909 00:21:25.271 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1686909 ']' 00:21:25.271 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.271 19:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:25.271 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.271 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:25.271 19:13:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.530 [2024-07-24 19:13:31.002135] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:21:25.530 [2024-07-24 19:13:31.002316] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.530 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.530 [2024-07-24 19:13:31.132398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.788 [2024-07-24 19:13:31.274618] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.788 [2024-07-24 19:13:31.274691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.788 [2024-07-24 19:13:31.274721] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.788 [2024-07-24 19:13:31.274738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.788 [2024-07-24 19:13:31.274752] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:25.788 [2024-07-24 19:13:31.274789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.788 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:25.788 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:25.788 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:25.788 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:25.788 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:25.788 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.789 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.x35R8oNgyX 00:21:25.789 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:25.789 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.x35R8oNgyX 00:21:25.789 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:21:25.789 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:25.789 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:21:25.789 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:25.789 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.x35R8oNgyX 00:21:25.789 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.x35R8oNgyX 00:21:25.789 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:26.355 [2024-07-24 19:13:31.873523] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.355 19:13:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:26.613 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:26.871 [2024-07-24 19:13:32.483207] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:26.871 [2024-07-24 19:13:32.483517] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.871 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:27.129 malloc0 00:21:27.129 19:13:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:27.694 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x35R8oNgyX 00:21:27.951 [2024-07-24 19:13:33.399638] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:27.951 [2024-07-24 19:13:33.399687] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:27.951 [2024-07-24 19:13:33.399732] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:27.951 request: 00:21:27.951 { 00:21:27.951 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.951 "host": "nqn.2016-06.io.spdk:host1", 00:21:27.951 "psk": "/tmp/tmp.x35R8oNgyX", 00:21:27.951 "method": "nvmf_subsystem_add_host", 00:21:27.951 "req_id": 1 00:21:27.951 } 00:21:27.951 Got JSON-RPC error response 00:21:27.951 response: 00:21:27.951 { 00:21:27.951 "code": -32603, 00:21:27.951 "message": "Internal error" 00:21:27.951 } 00:21:27.951 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:27.951 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:27.951 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:27.951 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:27.951 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1686909 00:21:27.951 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1686909 ']' 00:21:27.951 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1686909 00:21:27.951 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:27.951 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:27.951 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1686909 00:21:27.951 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:27.952 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:27.952 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1686909' 00:21:27.952 killing process with pid 1686909 00:21:27.952 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1686909 00:21:27.952 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1686909 00:21:28.210 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.x35R8oNgyX 00:21:28.210 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:28.210 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:28.210 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:28.210 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.210 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1687212 00:21:28.210 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:28.210 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 1687212 00:21:28.210 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1687212 ']' 00:21:28.210 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.210 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:28.210 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.211 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:28.211 19:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.469 [2024-07-24 19:13:33.917007] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:21:28.469 [2024-07-24 19:13:33.917184] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.469 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.469 [2024-07-24 19:13:34.038899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.728 [2024-07-24 19:13:34.178663] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.728 [2024-07-24 19:13:34.178724] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.728 [2024-07-24 19:13:34.178755] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.728 [2024-07-24 19:13:34.178773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.728 [2024-07-24 19:13:34.178788] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:28.728 [2024-07-24 19:13:34.178824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.728 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:28.728 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:28.728 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:28.728 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:28.728 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.728 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.728 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.x35R8oNgyX 00:21:28.728 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.x35R8oNgyX 00:21:28.728 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:28.986 [2024-07-24 19:13:34.665347] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.243 19:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:29.501 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:29.759 [2024-07-24 19:13:35.387336] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:29.759 [2024-07-24 19:13:35.387654] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.759 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:30.018 malloc0 00:21:30.276 19:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:30.533 19:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x35R8oNgyX 00:21:30.812 [2024-07-24 19:13:36.303787] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:30.812 19:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1687575 00:21:30.812 19:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:30.812 19:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:30.812 19:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1687575 /var/tmp/bdevperf.sock 00:21:30.812 19:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 1687575 ']' 00:21:30.812 19:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.812 19:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:30.812 19:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:30.812 19:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:30.812 19:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.812 [2024-07-24 19:13:36.374860] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:21:30.813 [2024-07-24 19:13:36.374948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1687575 ] 00:21:30.813 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.813 [2024-07-24 19:13:36.453842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.078 [2024-07-24 19:13:36.599479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.078 19:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:31.078 19:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:31.078 19:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x35R8oNgyX 00:21:31.643 [2024-07-24 19:13:37.256250] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.643 [2024-07-24 19:13:37.256398] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:31.901 TLSTESTn1 00:21:31.901 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:32.467 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:32.467 "subsystems": [ 00:21:32.467 { 00:21:32.467 "subsystem": "keyring", 00:21:32.467 "config": [] 00:21:32.467 }, 00:21:32.467 { 00:21:32.467 "subsystem": "iobuf", 00:21:32.467 "config": [ 00:21:32.467 { 00:21:32.467 "method": "iobuf_set_options", 00:21:32.467 "params": { 00:21:32.467 "small_pool_count": 8192, 00:21:32.467 "large_pool_count": 1024, 00:21:32.467 "small_bufsize": 8192, 00:21:32.467 "large_bufsize": 135168 00:21:32.467 } 00:21:32.467 } 00:21:32.467 ] 00:21:32.467 }, 00:21:32.467 { 00:21:32.468 "subsystem": "sock", 00:21:32.468 "config": [ 00:21:32.468 { 00:21:32.468 "method": "sock_set_default_impl", 00:21:32.468 "params": { 00:21:32.468 "impl_name": "posix" 00:21:32.468 } 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "method": "sock_impl_set_options", 00:21:32.468 "params": { 00:21:32.468 "impl_name": "ssl", 00:21:32.468 "recv_buf_size": 4096, 00:21:32.468 "send_buf_size": 4096, 
00:21:32.468 "enable_recv_pipe": true, 00:21:32.468 "enable_quickack": false, 00:21:32.468 "enable_placement_id": 0, 00:21:32.468 "enable_zerocopy_send_server": true, 00:21:32.468 "enable_zerocopy_send_client": false, 00:21:32.468 "zerocopy_threshold": 0, 00:21:32.468 "tls_version": 0, 00:21:32.468 "enable_ktls": false 00:21:32.468 } 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "method": "sock_impl_set_options", 00:21:32.468 "params": { 00:21:32.468 "impl_name": "posix", 00:21:32.468 "recv_buf_size": 2097152, 00:21:32.468 "send_buf_size": 2097152, 00:21:32.468 "enable_recv_pipe": true, 00:21:32.468 "enable_quickack": false, 00:21:32.468 "enable_placement_id": 0, 00:21:32.468 "enable_zerocopy_send_server": true, 00:21:32.468 "enable_zerocopy_send_client": false, 00:21:32.468 "zerocopy_threshold": 0, 00:21:32.468 "tls_version": 0, 00:21:32.468 "enable_ktls": false 00:21:32.468 } 00:21:32.468 } 00:21:32.468 ] 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "subsystem": "vmd", 00:21:32.468 "config": [] 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "subsystem": "accel", 00:21:32.468 "config": [ 00:21:32.468 { 00:21:32.468 "method": "accel_set_options", 00:21:32.468 "params": { 00:21:32.468 "small_cache_size": 128, 00:21:32.468 "large_cache_size": 16, 00:21:32.468 "task_count": 2048, 00:21:32.468 "sequence_count": 2048, 00:21:32.468 "buf_count": 2048 00:21:32.468 } 00:21:32.468 } 00:21:32.468 ] 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "subsystem": "bdev", 00:21:32.468 "config": [ 00:21:32.468 { 00:21:32.468 "method": "bdev_set_options", 00:21:32.468 "params": { 00:21:32.468 "bdev_io_pool_size": 65535, 00:21:32.468 "bdev_io_cache_size": 256, 00:21:32.468 "bdev_auto_examine": true, 00:21:32.468 "iobuf_small_cache_size": 128, 00:21:32.468 "iobuf_large_cache_size": 16 00:21:32.468 } 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "method": "bdev_raid_set_options", 00:21:32.468 "params": { 00:21:32.468 "process_window_size_kb": 1024, 00:21:32.468 "process_max_bandwidth_mb_sec": 0 00:21:32.468 } 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "method": "bdev_iscsi_set_options", 00:21:32.468 "params": { 00:21:32.468 "timeout_sec": 30 00:21:32.468 } 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "method": "bdev_nvme_set_options", 00:21:32.468 "params": { 00:21:32.468 "action_on_timeout": "none", 00:21:32.468 "timeout_us": 0, 00:21:32.468 "timeout_admin_us": 0, 00:21:32.468 "keep_alive_timeout_ms": 10000, 00:21:32.468 "arbitration_burst": 0, 00:21:32.468 "low_priority_weight": 0, 00:21:32.468 "medium_priority_weight": 0, 00:21:32.468 "high_priority_weight": 0, 00:21:32.468 "nvme_adminq_poll_period_us": 10000, 00:21:32.468 "nvme_ioq_poll_period_us": 0, 00:21:32.468 "io_queue_requests": 0, 00:21:32.468 "delay_cmd_submit": true, 00:21:32.468 "transport_retry_count": 4, 00:21:32.468 "bdev_retry_count": 3, 00:21:32.468 "transport_ack_timeout": 0, 00:21:32.468 "ctrlr_loss_timeout_sec": 0, 00:21:32.468 "reconnect_delay_sec": 0, 00:21:32.468 "fast_io_fail_timeout_sec": 0, 00:21:32.468 "disable_auto_failback": false, 00:21:32.468 "generate_uuids": false, 00:21:32.468 "transport_tos": 0, 00:21:32.468 "nvme_error_stat": false, 00:21:32.468 "rdma_srq_size": 0, 00:21:32.468 "io_path_stat": false, 00:21:32.468 "allow_accel_sequence": false, 00:21:32.468 "rdma_max_cq_size": 0, 00:21:32.468 "rdma_cm_event_timeout_ms": 0, 00:21:32.468 "dhchap_digests": [ 00:21:32.468 "sha256", 00:21:32.468 "sha384", 00:21:32.468 "sha512" 00:21:32.468 ], 00:21:32.468 "dhchap_dhgroups": [ 00:21:32.468 "null", 00:21:32.468 "ffdhe2048", 00:21:32.468 
"ffdhe3072", 00:21:32.468 "ffdhe4096", 00:21:32.468 "ffdhe6144", 00:21:32.468 "ffdhe8192" 00:21:32.468 ] 00:21:32.468 } 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "method": "bdev_nvme_set_hotplug", 00:21:32.468 "params": { 00:21:32.468 "period_us": 100000, 00:21:32.468 "enable": false 00:21:32.468 } 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "method": "bdev_malloc_create", 00:21:32.468 "params": { 00:21:32.468 "name": "malloc0", 00:21:32.468 "num_blocks": 8192, 00:21:32.468 "block_size": 4096, 00:21:32.468 "physical_block_size": 4096, 00:21:32.468 "uuid": "b2131cbc-c1fa-4387-9e21-7a7bb96dc644", 00:21:32.468 "optimal_io_boundary": 0, 00:21:32.468 "md_size": 0, 00:21:32.468 "dif_type": 0, 00:21:32.468 "dif_is_head_of_md": false, 00:21:32.468 "dif_pi_format": 0 00:21:32.468 } 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "method": "bdev_wait_for_examine" 00:21:32.468 } 00:21:32.468 ] 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "subsystem": "nbd", 00:21:32.468 "config": [] 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "subsystem": "scheduler", 00:21:32.468 "config": [ 00:21:32.468 { 00:21:32.468 "method": "framework_set_scheduler", 00:21:32.468 "params": { 00:21:32.468 "name": "static" 00:21:32.468 } 00:21:32.468 } 00:21:32.468 ] 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "subsystem": "nvmf", 00:21:32.468 "config": [ 00:21:32.468 { 00:21:32.468 "method": "nvmf_set_config", 00:21:32.468 "params": { 00:21:32.468 "discovery_filter": "match_any", 00:21:32.468 "admin_cmd_passthru": { 00:21:32.468 "identify_ctrlr": false 00:21:32.468 } 00:21:32.468 } 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "method": "nvmf_set_max_subsystems", 00:21:32.468 "params": { 00:21:32.468 "max_subsystems": 1024 00:21:32.468 } 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "method": "nvmf_set_crdt", 00:21:32.468 "params": { 00:21:32.468 "crdt1": 0, 00:21:32.468 "crdt2": 0, 00:21:32.468 "crdt3": 0 00:21:32.468 } 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "method": "nvmf_create_transport", 00:21:32.468 "params": { 00:21:32.468 "trtype": "TCP", 00:21:32.468 "max_queue_depth": 128, 00:21:32.468 "max_io_qpairs_per_ctrlr": 127, 00:21:32.468 "in_capsule_data_size": 4096, 00:21:32.468 "max_io_size": 131072, 00:21:32.468 "io_unit_size": 131072, 00:21:32.468 "max_aq_depth": 128, 00:21:32.468 "num_shared_buffers": 511, 00:21:32.468 "buf_cache_size": 4294967295, 00:21:32.468 "dif_insert_or_strip": false, 00:21:32.468 "zcopy": false, 00:21:32.468 "c2h_success": false, 00:21:32.468 "sock_priority": 0, 00:21:32.468 "abort_timeout_sec": 1, 00:21:32.468 "ack_timeout": 0, 00:21:32.468 "data_wr_pool_size": 0 00:21:32.468 } 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "method": "nvmf_create_subsystem", 00:21:32.468 "params": { 00:21:32.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.468 "allow_any_host": false, 00:21:32.468 "serial_number": "SPDK00000000000001", 00:21:32.468 "model_number": "SPDK bdev Controller", 00:21:32.468 "max_namespaces": 10, 00:21:32.468 "min_cntlid": 1, 00:21:32.468 "max_cntlid": 65519, 00:21:32.468 "ana_reporting": false 00:21:32.468 } 00:21:32.468 }, 00:21:32.468 { 00:21:32.468 "method": "nvmf_subsystem_add_host", 00:21:32.468 "params": { 00:21:32.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.468 "host": "nqn.2016-06.io.spdk:host1", 00:21:32.469 "psk": "/tmp/tmp.x35R8oNgyX" 00:21:32.469 } 00:21:32.469 }, 00:21:32.469 { 00:21:32.469 "method": "nvmf_subsystem_add_ns", 00:21:32.469 "params": { 00:21:32.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.469 "namespace": { 00:21:32.469 "nsid": 1, 00:21:32.469 
"bdev_name": "malloc0", 00:21:32.469 "nguid": "B2131CBCC1FA43879E217A7BB96DC644", 00:21:32.469 "uuid": "b2131cbc-c1fa-4387-9e21-7a7bb96dc644", 00:21:32.469 "no_auto_visible": false 00:21:32.469 } 00:21:32.469 } 00:21:32.469 }, 00:21:32.469 { 00:21:32.469 "method": "nvmf_subsystem_add_listener", 00:21:32.469 "params": { 00:21:32.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.469 "listen_address": { 00:21:32.469 "trtype": "TCP", 00:21:32.469 "adrfam": "IPv4", 00:21:32.469 "traddr": "10.0.0.2", 00:21:32.469 "trsvcid": "4420" 00:21:32.469 }, 00:21:32.469 "secure_channel": true 00:21:32.469 } 00:21:32.469 } 00:21:32.469 ] 00:21:32.469 } 00:21:32.469 ] 00:21:32.469 }' 00:21:32.469 19:13:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:32.727 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:32.727 "subsystems": [ 00:21:32.727 { 00:21:32.727 "subsystem": "keyring", 00:21:32.727 "config": [] 00:21:32.727 }, 00:21:32.727 { 00:21:32.727 "subsystem": "iobuf", 00:21:32.727 "config": [ 00:21:32.727 { 00:21:32.727 "method": "iobuf_set_options", 00:21:32.727 "params": { 00:21:32.727 "small_pool_count": 8192, 00:21:32.727 "large_pool_count": 1024, 00:21:32.727 "small_bufsize": 8192, 00:21:32.727 "large_bufsize": 135168 00:21:32.727 } 00:21:32.727 } 00:21:32.727 ] 00:21:32.727 }, 00:21:32.727 { 00:21:32.727 "subsystem": "sock", 00:21:32.727 "config": [ 00:21:32.727 { 00:21:32.727 "method": "sock_set_default_impl", 00:21:32.727 "params": { 00:21:32.727 "impl_name": "posix" 00:21:32.727 } 00:21:32.727 }, 00:21:32.727 { 00:21:32.727 "method": "sock_impl_set_options", 00:21:32.727 "params": { 00:21:32.727 "impl_name": "ssl", 00:21:32.727 "recv_buf_size": 4096, 00:21:32.727 "send_buf_size": 4096, 00:21:32.727 "enable_recv_pipe": true, 00:21:32.727 "enable_quickack": false, 00:21:32.727 "enable_placement_id": 0, 00:21:32.727 "enable_zerocopy_send_server": true, 00:21:32.727 "enable_zerocopy_send_client": false, 00:21:32.727 "zerocopy_threshold": 0, 00:21:32.727 "tls_version": 0, 00:21:32.727 "enable_ktls": false 00:21:32.727 } 00:21:32.727 }, 00:21:32.727 { 00:21:32.727 "method": "sock_impl_set_options", 00:21:32.727 "params": { 00:21:32.727 "impl_name": "posix", 00:21:32.727 "recv_buf_size": 2097152, 00:21:32.727 "send_buf_size": 2097152, 00:21:32.727 "enable_recv_pipe": true, 00:21:32.727 "enable_quickack": false, 00:21:32.727 "enable_placement_id": 0, 00:21:32.727 "enable_zerocopy_send_server": true, 00:21:32.727 "enable_zerocopy_send_client": false, 00:21:32.727 "zerocopy_threshold": 0, 00:21:32.727 "tls_version": 0, 00:21:32.727 "enable_ktls": false 00:21:32.727 } 00:21:32.727 } 00:21:32.727 ] 00:21:32.727 }, 00:21:32.727 { 00:21:32.727 "subsystem": "vmd", 00:21:32.727 "config": [] 00:21:32.727 }, 00:21:32.727 { 00:21:32.727 "subsystem": "accel", 00:21:32.727 "config": [ 00:21:32.727 { 00:21:32.727 "method": "accel_set_options", 00:21:32.727 "params": { 00:21:32.727 "small_cache_size": 128, 00:21:32.727 "large_cache_size": 16, 00:21:32.727 "task_count": 2048, 00:21:32.727 "sequence_count": 2048, 00:21:32.727 "buf_count": 2048 00:21:32.727 } 00:21:32.727 } 00:21:32.727 ] 00:21:32.727 }, 00:21:32.727 { 00:21:32.727 "subsystem": "bdev", 00:21:32.727 "config": [ 00:21:32.727 { 00:21:32.727 "method": "bdev_set_options", 00:21:32.728 "params": { 00:21:32.728 "bdev_io_pool_size": 65535, 00:21:32.728 "bdev_io_cache_size": 256, 00:21:32.728 
"bdev_auto_examine": true, 00:21:32.728 "iobuf_small_cache_size": 128, 00:21:32.728 "iobuf_large_cache_size": 16 00:21:32.728 } 00:21:32.728 }, 00:21:32.728 { 00:21:32.728 "method": "bdev_raid_set_options", 00:21:32.728 "params": { 00:21:32.728 "process_window_size_kb": 1024, 00:21:32.728 "process_max_bandwidth_mb_sec": 0 00:21:32.728 } 00:21:32.728 }, 00:21:32.728 { 00:21:32.728 "method": "bdev_iscsi_set_options", 00:21:32.728 "params": { 00:21:32.728 "timeout_sec": 30 00:21:32.728 } 00:21:32.728 }, 00:21:32.728 { 00:21:32.728 "method": "bdev_nvme_set_options", 00:21:32.728 "params": { 00:21:32.728 "action_on_timeout": "none", 00:21:32.728 "timeout_us": 0, 00:21:32.728 "timeout_admin_us": 0, 00:21:32.728 "keep_alive_timeout_ms": 10000, 00:21:32.728 "arbitration_burst": 0, 00:21:32.728 "low_priority_weight": 0, 00:21:32.728 "medium_priority_weight": 0, 00:21:32.728 "high_priority_weight": 0, 00:21:32.728 "nvme_adminq_poll_period_us": 10000, 00:21:32.728 "nvme_ioq_poll_period_us": 0, 00:21:32.728 "io_queue_requests": 512, 00:21:32.728 "delay_cmd_submit": true, 00:21:32.728 "transport_retry_count": 4, 00:21:32.728 "bdev_retry_count": 3, 00:21:32.728 "transport_ack_timeout": 0, 00:21:32.728 "ctrlr_loss_timeout_sec": 0, 00:21:32.728 "reconnect_delay_sec": 0, 00:21:32.728 "fast_io_fail_timeout_sec": 0, 00:21:32.728 "disable_auto_failback": false, 00:21:32.728 "generate_uuids": false, 00:21:32.728 "transport_tos": 0, 00:21:32.728 "nvme_error_stat": false, 00:21:32.728 "rdma_srq_size": 0, 00:21:32.728 "io_path_stat": false, 00:21:32.728 "allow_accel_sequence": false, 00:21:32.728 "rdma_max_cq_size": 0, 00:21:32.728 "rdma_cm_event_timeout_ms": 0, 00:21:32.728 "dhchap_digests": [ 00:21:32.728 "sha256", 00:21:32.728 "sha384", 00:21:32.728 "sha512" 00:21:32.728 ], 00:21:32.728 "dhchap_dhgroups": [ 00:21:32.728 "null", 00:21:32.728 "ffdhe2048", 00:21:32.728 "ffdhe3072", 00:21:32.728 "ffdhe4096", 00:21:32.728 "ffdhe6144", 00:21:32.728 "ffdhe8192" 00:21:32.728 ] 00:21:32.728 } 00:21:32.728 }, 00:21:32.728 { 00:21:32.728 "method": "bdev_nvme_attach_controller", 00:21:32.728 "params": { 00:21:32.728 "name": "TLSTEST", 00:21:32.728 "trtype": "TCP", 00:21:32.728 "adrfam": "IPv4", 00:21:32.728 "traddr": "10.0.0.2", 00:21:32.728 "trsvcid": "4420", 00:21:32.728 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.728 "prchk_reftag": false, 00:21:32.728 "prchk_guard": false, 00:21:32.728 "ctrlr_loss_timeout_sec": 0, 00:21:32.728 "reconnect_delay_sec": 0, 00:21:32.728 "fast_io_fail_timeout_sec": 0, 00:21:32.728 "psk": "/tmp/tmp.x35R8oNgyX", 00:21:32.728 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:32.728 "hdgst": false, 00:21:32.728 "ddgst": false 00:21:32.728 } 00:21:32.728 }, 00:21:32.728 { 00:21:32.728 "method": "bdev_nvme_set_hotplug", 00:21:32.728 "params": { 00:21:32.728 "period_us": 100000, 00:21:32.728 "enable": false 00:21:32.728 } 00:21:32.728 }, 00:21:32.728 { 00:21:32.728 "method": "bdev_wait_for_examine" 00:21:32.728 } 00:21:32.728 ] 00:21:32.728 }, 00:21:32.728 { 00:21:32.728 "subsystem": "nbd", 00:21:32.728 "config": [] 00:21:32.728 } 00:21:32.728 ] 00:21:32.728 }' 00:21:32.728 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1687575 00:21:32.728 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1687575 ']' 00:21:32.728 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1687575 00:21:32.728 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:21:32.728 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:32.728 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1687575 00:21:32.728 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:32.728 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:32.728 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1687575' 00:21:32.728 killing process with pid 1687575 00:21:32.728 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1687575 00:21:32.728 Received shutdown signal, test time was about 10.000000 seconds 00:21:32.728 00:21:32.728 Latency(us) 00:21:32.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.728 =================================================================================================================== 00:21:32.728 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:32.728 [2024-07-24 19:13:38.349717] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:32.728 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1687575 00:21:32.986 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1687212 00:21:32.986 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1687212 ']' 00:21:32.986 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1687212 00:21:32.986 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:32.986 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:33.244 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1687212 00:21:33.244 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:33.244 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:33.244 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1687212' 00:21:33.244 killing process with pid 1687212 00:21:33.244 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1687212 00:21:33.244 [2024-07-24 19:13:38.711794] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:33.244 19:13:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1687212 00:21:33.504 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:33.504 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:33.504 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:33.504 "subsystems": [ 00:21:33.504 { 00:21:33.504 "subsystem": "keyring", 00:21:33.504 "config": [] 00:21:33.504 }, 00:21:33.504 { 00:21:33.504 "subsystem": "iobuf", 00:21:33.504 "config": [ 00:21:33.504 { 00:21:33.504 "method": "iobuf_set_options", 
00:21:33.504 "params": { 00:21:33.504 "small_pool_count": 8192, 00:21:33.504 "large_pool_count": 1024, 00:21:33.504 "small_bufsize": 8192, 00:21:33.504 "large_bufsize": 135168 00:21:33.504 } 00:21:33.504 } 00:21:33.504 ] 00:21:33.504 }, 00:21:33.504 { 00:21:33.504 "subsystem": "sock", 00:21:33.504 "config": [ 00:21:33.504 { 00:21:33.504 "method": "sock_set_default_impl", 00:21:33.504 "params": { 00:21:33.504 "impl_name": "posix" 00:21:33.504 } 00:21:33.504 }, 00:21:33.504 { 00:21:33.504 "method": "sock_impl_set_options", 00:21:33.504 "params": { 00:21:33.504 "impl_name": "ssl", 00:21:33.504 "recv_buf_size": 4096, 00:21:33.504 "send_buf_size": 4096, 00:21:33.504 "enable_recv_pipe": true, 00:21:33.504 "enable_quickack": false, 00:21:33.504 "enable_placement_id": 0, 00:21:33.504 "enable_zerocopy_send_server": true, 00:21:33.504 "enable_zerocopy_send_client": false, 00:21:33.504 "zerocopy_threshold": 0, 00:21:33.504 "tls_version": 0, 00:21:33.504 "enable_ktls": false 00:21:33.504 } 00:21:33.504 }, 00:21:33.504 { 00:21:33.504 "method": "sock_impl_set_options", 00:21:33.504 "params": { 00:21:33.504 "impl_name": "posix", 00:21:33.504 "recv_buf_size": 2097152, 00:21:33.504 "send_buf_size": 2097152, 00:21:33.504 "enable_recv_pipe": true, 00:21:33.504 "enable_quickack": false, 00:21:33.504 "enable_placement_id": 0, 00:21:33.504 "enable_zerocopy_send_server": true, 00:21:33.504 "enable_zerocopy_send_client": false, 00:21:33.504 "zerocopy_threshold": 0, 00:21:33.504 "tls_version": 0, 00:21:33.504 "enable_ktls": false 00:21:33.504 } 00:21:33.504 } 00:21:33.504 ] 00:21:33.504 }, 00:21:33.504 { 00:21:33.504 "subsystem": "vmd", 00:21:33.504 "config": [] 00:21:33.504 }, 00:21:33.504 { 00:21:33.504 "subsystem": "accel", 00:21:33.504 "config": [ 00:21:33.504 { 00:21:33.504 "method": "accel_set_options", 00:21:33.504 "params": { 00:21:33.504 "small_cache_size": 128, 00:21:33.504 "large_cache_size": 16, 00:21:33.504 "task_count": 2048, 00:21:33.504 "sequence_count": 2048, 00:21:33.505 "buf_count": 2048 00:21:33.505 } 00:21:33.505 } 00:21:33.505 ] 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "subsystem": "bdev", 00:21:33.505 "config": [ 00:21:33.505 { 00:21:33.505 "method": "bdev_set_options", 00:21:33.505 "params": { 00:21:33.505 "bdev_io_pool_size": 65535, 00:21:33.505 "bdev_io_cache_size": 256, 00:21:33.505 "bdev_auto_examine": true, 00:21:33.505 "iobuf_small_cache_size": 128, 00:21:33.505 "iobuf_large_cache_size": 16 00:21:33.505 } 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "method": "bdev_raid_set_options", 00:21:33.505 "params": { 00:21:33.505 "process_window_size_kb": 1024, 00:21:33.505 "process_max_bandwidth_mb_sec": 0 00:21:33.505 } 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "method": "bdev_iscsi_set_options", 00:21:33.505 "params": { 00:21:33.505 "timeout_sec": 30 00:21:33.505 } 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "method": "bdev_nvme_set_options", 00:21:33.505 "params": { 00:21:33.505 "action_on_timeout": "none", 00:21:33.505 "timeout_us": 0, 00:21:33.505 "timeout_admin_us": 0, 00:21:33.505 "keep_alive_timeout_ms": 10000, 00:21:33.505 "arbitration_burst": 0, 00:21:33.505 "low_priority_weight": 0, 00:21:33.505 "medium_priority_weight": 0, 00:21:33.505 "high_priority_weight": 0, 00:21:33.505 "nvme_adminq_poll_period_us": 10000, 00:21:33.505 "nvme_ioq_poll_period_us": 0, 00:21:33.505 "io_queue_requests": 0, 00:21:33.505 "delay_cmd_submit": true, 00:21:33.505 "transport_retry_count": 4, 00:21:33.505 "bdev_retry_count": 3, 00:21:33.505 "transport_ack_timeout": 0, 00:21:33.505 
"ctrlr_loss_timeout_sec": 0, 00:21:33.505 "reconnect_delay_sec": 0, 00:21:33.505 "fast_io_fail_timeout_sec": 0, 00:21:33.505 "disable_auto_failback": false, 00:21:33.505 "generate_uuids": false, 00:21:33.505 "transport_tos": 0, 00:21:33.505 "nvme_error_stat": false, 00:21:33.505 "rdma_srq_size": 0, 00:21:33.505 "io_path_stat": false, 00:21:33.505 "allow_accel_sequence": false, 00:21:33.505 "rdma_max_cq_size": 0, 00:21:33.505 "rdma_cm_event_timeout_ms": 0, 00:21:33.505 "dhchap_digests": [ 00:21:33.505 "sha256", 00:21:33.505 "sha384", 00:21:33.505 "sha512" 00:21:33.505 ], 00:21:33.505 "dhchap_dhgroups": [ 00:21:33.505 "null", 00:21:33.505 "ffdhe2048", 00:21:33.505 "ffdhe3072", 00:21:33.505 "ffdhe4096", 00:21:33.505 "ffdhe6144", 00:21:33.505 "ffdhe8192" 00:21:33.505 ] 00:21:33.505 } 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "method": "bdev_nvme_set_hotplug", 00:21:33.505 "params": { 00:21:33.505 "period_us": 100000, 00:21:33.505 "enable": false 00:21:33.505 } 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "method": "bdev_malloc_create", 00:21:33.505 "params": { 00:21:33.505 "name": "malloc0", 00:21:33.505 "num_blocks": 8192, 00:21:33.505 "block_size": 4096, 00:21:33.505 "physical_block_size": 4096, 00:21:33.505 "uuid": "b2131cbc-c1fa-4387-9e21-7a7bb96dc644", 00:21:33.505 "optimal_io_boundary": 0, 00:21:33.505 "md_size": 0, 00:21:33.505 "dif_type": 0, 00:21:33.505 "dif_is_head_of_md": false, 00:21:33.505 "dif_pi_format": 0 00:21:33.505 } 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "method": "bdev_wait_for_examine" 00:21:33.505 } 00:21:33.505 ] 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "subsystem": "nbd", 00:21:33.505 "config": [] 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "subsystem": "scheduler", 00:21:33.505 "config": [ 00:21:33.505 { 00:21:33.505 "method": "framework_set_scheduler", 00:21:33.505 "params": { 00:21:33.505 "name": "static" 00:21:33.505 } 00:21:33.505 } 00:21:33.505 ] 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "subsystem": "nvmf", 00:21:33.505 "config": [ 00:21:33.505 { 00:21:33.505 "method": "nvmf_set_config", 00:21:33.505 "params": { 00:21:33.505 "discovery_filter": "match_any", 00:21:33.505 "admin_cmd_passthru": { 00:21:33.505 "identify_ctrlr": false 00:21:33.505 } 00:21:33.505 } 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "method": "nvmf_set_max_subsystems", 00:21:33.505 "params": { 00:21:33.505 "max_subsystems": 1024 00:21:33.505 } 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "method": "nvmf_set_crdt", 00:21:33.505 "params": { 00:21:33.505 "crdt1": 0, 00:21:33.505 "crdt2": 0, 00:21:33.505 "crdt3": 0 00:21:33.505 } 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "method": "nvmf_create_transport", 00:21:33.505 "params": { 00:21:33.505 "trtype": "TCP", 00:21:33.505 "max_queue_depth": 128, 00:21:33.505 "max_io_qpairs_per_ctrlr": 127, 00:21:33.505 "in_capsule_data_size": 4096, 00:21:33.505 "max_io_size": 131072, 00:21:33.505 "io_unit_size": 131072, 00:21:33.505 "max_aq_depth": 128, 00:21:33.505 "num_shared_buffers": 511, 00:21:33.505 "buf_cache_size": 4294967295, 00:21:33.505 "dif_insert_or_strip": false, 00:21:33.505 "zcopy": false, 00:21:33.505 "c2h_success": false, 00:21:33.505 "sock_priority": 0, 00:21:33.505 "abort_timeout_sec": 1, 00:21:33.505 "ack_timeout": 0, 00:21:33.505 "data_wr_pool_size": 0 00:21:33.505 } 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "method": "nvmf_create_subsystem", 00:21:33.505 "params": { 00:21:33.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.505 "allow_any_host": false, 00:21:33.505 "serial_number": "SPDK00000000000001", 00:21:33.505 
"model_number": "SPDK bdev Controller", 00:21:33.505 "max_namespaces": 10, 00:21:33.505 "min_cntlid": 1, 00:21:33.505 "max_cntlid": 65519, 00:21:33.505 "ana_reporting": false 00:21:33.505 } 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "method": "nvmf_subsystem_add_host", 00:21:33.505 "params": { 00:21:33.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.505 "host": "nqn.2016-06.io.spdk:host1", 00:21:33.505 "psk": "/tmp/tmp.x35R8oNgyX" 00:21:33.505 } 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "method": "nvmf_subsystem_add_ns", 00:21:33.505 "params": { 00:21:33.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.505 "namespace": { 00:21:33.505 "nsid": 1, 00:21:33.505 "bdev_name": "malloc0", 00:21:33.505 "nguid": "B2131CBCC1FA43879E217A7BB96DC644", 00:21:33.505 "uuid": "b2131cbc-c1fa-4387-9e21-7a7bb96dc644", 00:21:33.505 "no_auto_visible": false 00:21:33.505 } 00:21:33.505 } 00:21:33.505 }, 00:21:33.505 { 00:21:33.505 "method": "nvmf_subsystem_add_listener", 00:21:33.505 "params": { 00:21:33.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.505 "listen_address": { 00:21:33.505 "trtype": "TCP", 00:21:33.505 "adrfam": "IPv4", 00:21:33.505 "traddr": "10.0.0.2", 00:21:33.505 "trsvcid": "4420" 00:21:33.505 }, 00:21:33.505 "secure_channel": true 00:21:33.505 } 00:21:33.505 } 00:21:33.505 ] 00:21:33.505 } 00:21:33.505 ] 00:21:33.505 }' 00:21:33.505 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:33.505 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.505 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1687902 00:21:33.505 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:33.506 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1687902 00:21:33.506 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1687902 ']' 00:21:33.506 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.506 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:33.506 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.506 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:33.506 19:13:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.506 [2024-07-24 19:13:39.162144] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:21:33.506 [2024-07-24 19:13:39.162325] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.764 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.764 [2024-07-24 19:13:39.279784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.764 [2024-07-24 19:13:39.420795] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:33.764 [2024-07-24 19:13:39.420874] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.764 [2024-07-24 19:13:39.420894] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.764 [2024-07-24 19:13:39.420922] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.764 [2024-07-24 19:13:39.420936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:33.764 [2024-07-24 19:13:39.421040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.022 [2024-07-24 19:13:39.678667] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.022 [2024-07-24 19:13:39.707510] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:34.279 [2024-07-24 19:13:39.723583] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:34.279 [2024-07-24 19:13:39.723863] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1688054 00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1688054 /var/tmp/bdevperf.sock 00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1688054 ']' 00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.845 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:34.845 "subsystems": [ 00:21:34.845 { 00:21:34.845 "subsystem": "keyring", 00:21:34.845 "config": [] 00:21:34.845 }, 00:21:34.845 { 00:21:34.845 "subsystem": "iobuf", 00:21:34.845 "config": [ 00:21:34.845 { 00:21:34.845 "method": "iobuf_set_options", 00:21:34.845 "params": { 00:21:34.845 "small_pool_count": 8192, 00:21:34.845 "large_pool_count": 1024, 00:21:34.845 "small_bufsize": 8192, 00:21:34.845 "large_bufsize": 135168 00:21:34.845 } 00:21:34.845 } 00:21:34.845 ] 00:21:34.845 }, 00:21:34.845 { 00:21:34.845 "subsystem": "sock", 00:21:34.845 "config": [ 00:21:34.845 { 00:21:34.845 "method": "sock_set_default_impl", 00:21:34.845 "params": { 00:21:34.845 "impl_name": "posix" 00:21:34.845 } 00:21:34.845 }, 00:21:34.845 { 00:21:34.845 "method": "sock_impl_set_options", 00:21:34.845 "params": { 00:21:34.845 "impl_name": "ssl", 00:21:34.845 "recv_buf_size": 4096, 00:21:34.845 "send_buf_size": 4096, 00:21:34.845 "enable_recv_pipe": true, 00:21:34.845 "enable_quickack": false, 00:21:34.845 "enable_placement_id": 0, 00:21:34.845 "enable_zerocopy_send_server": true, 00:21:34.845 "enable_zerocopy_send_client": false, 00:21:34.845 "zerocopy_threshold": 0, 00:21:34.845 "tls_version": 0, 00:21:34.845 "enable_ktls": false 00:21:34.845 } 00:21:34.845 }, 00:21:34.845 { 00:21:34.845 "method": "sock_impl_set_options", 00:21:34.845 "params": { 00:21:34.845 "impl_name": "posix", 00:21:34.845 "recv_buf_size": 2097152, 00:21:34.845 "send_buf_size": 2097152, 00:21:34.845 "enable_recv_pipe": true, 00:21:34.845 "enable_quickack": false, 00:21:34.845 "enable_placement_id": 0, 00:21:34.845 "enable_zerocopy_send_server": true, 00:21:34.845 "enable_zerocopy_send_client": false, 00:21:34.845 "zerocopy_threshold": 0, 00:21:34.845 "tls_version": 0, 00:21:34.845 "enable_ktls": false 00:21:34.845 } 00:21:34.845 } 00:21:34.845 ] 00:21:34.845 }, 00:21:34.845 { 00:21:34.845 "subsystem": "vmd", 00:21:34.845 "config": [] 00:21:34.845 }, 00:21:34.845 { 00:21:34.845 "subsystem": "accel", 00:21:34.845 "config": [ 00:21:34.845 { 00:21:34.845 "method": "accel_set_options", 00:21:34.845 "params": { 00:21:34.845 "small_cache_size": 128, 00:21:34.845 "large_cache_size": 16, 00:21:34.845 "task_count": 2048, 00:21:34.845 "sequence_count": 2048, 00:21:34.845 "buf_count": 2048 00:21:34.845 } 00:21:34.845 } 00:21:34.845 ] 00:21:34.845 }, 00:21:34.845 { 00:21:34.845 "subsystem": "bdev", 00:21:34.845 "config": [ 00:21:34.845 { 00:21:34.845 "method": "bdev_set_options", 00:21:34.845 "params": { 00:21:34.845 "bdev_io_pool_size": 65535, 00:21:34.845 "bdev_io_cache_size": 256, 00:21:34.845 "bdev_auto_examine": true, 00:21:34.845 "iobuf_small_cache_size": 128, 00:21:34.845 "iobuf_large_cache_size": 16 00:21:34.846 } 00:21:34.846 }, 00:21:34.846 { 00:21:34.846 "method": "bdev_raid_set_options", 00:21:34.846 "params": { 00:21:34.846 "process_window_size_kb": 1024, 00:21:34.846 "process_max_bandwidth_mb_sec": 0 00:21:34.846 } 00:21:34.846 }, 00:21:34.846 { 00:21:34.846 "method": "bdev_iscsi_set_options", 00:21:34.846 "params": { 00:21:34.846 "timeout_sec": 30 00:21:34.846 } 00:21:34.846 }, 00:21:34.846 { 00:21:34.846 "method": "bdev_nvme_set_options", 00:21:34.846 "params": { 00:21:34.846 "action_on_timeout": "none", 00:21:34.846 "timeout_us": 
0, 00:21:34.846 "timeout_admin_us": 0, 00:21:34.846 "keep_alive_timeout_ms": 10000, 00:21:34.846 "arbitration_burst": 0, 00:21:34.846 "low_priority_weight": 0, 00:21:34.846 "medium_priority_weight": 0, 00:21:34.846 "high_priority_weight": 0, 00:21:34.846 "nvme_adminq_poll_period_us": 10000, 00:21:34.846 "nvme_ioq_poll_period_us": 0, 00:21:34.846 "io_queue_requests": 512, 00:21:34.846 "delay_cmd_submit": true, 00:21:34.846 "transport_retry_count": 4, 00:21:34.846 "bdev_retry_count": 3, 00:21:34.846 "transport_ack_timeout": 0, 00:21:34.846 "ctrlr_loss_timeout_sec": 0, 00:21:34.846 "reconnect_delay_sec": 0, 00:21:34.846 "fast_io_fail_timeout_sec": 0, 00:21:34.846 "disable_auto_failback": false, 00:21:34.846 "generate_uuids": false, 00:21:34.846 "transport_tos": 0, 00:21:34.846 "nvme_error_stat": false, 00:21:34.846 "rdma_srq_size": 0, 00:21:34.846 "io_path_stat": false, 00:21:34.846 "allow_accel_sequence": false, 00:21:34.846 "rdma_max_cq_size": 0, 00:21:34.846 "rdma_cm_event_timeout_ms": 0, 00:21:34.846 "dhchap_digests": [ 00:21:34.846 "sha256", 00:21:34.846 "sha384", 00:21:34.846 "sha512" 00:21:34.846 ], 00:21:34.846 "dhchap_dhgroups": [ 00:21:34.846 "null", 00:21:34.846 "ffdhe2048", 00:21:34.846 "ffdhe3072", 00:21:34.846 "ffdhe4096", 00:21:34.846 "ffdhe6144", 00:21:34.846 "ffdhe8192" 00:21:34.846 ] 00:21:34.846 } 00:21:34.846 }, 00:21:34.846 { 00:21:34.846 "method": "bdev_nvme_attach_controller", 00:21:34.846 "params": { 00:21:34.846 "name": "TLSTEST", 00:21:34.846 "trtype": "TCP", 00:21:34.846 "adrfam": "IPv4", 00:21:34.846 "traddr": "10.0.0.2", 00:21:34.846 "trsvcid": "4420", 00:21:34.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.846 "prchk_reftag": false, 00:21:34.846 "prchk_guard": false, 00:21:34.846 "ctrlr_loss_timeout_sec": 0, 00:21:34.846 "reconnect_delay_sec": 0, 00:21:34.846 "fast_io_fail_timeout_sec": 0, 00:21:34.846 "psk": "/tmp/tmp.x35R8oNgyX", 00:21:34.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.846 "hdgst": false, 00:21:34.846 "ddgst": false 00:21:34.846 } 00:21:34.846 }, 00:21:34.846 { 00:21:34.846 "method": "bdev_nvme_set_hotplug", 00:21:34.846 "params": { 00:21:34.846 "period_us": 100000, 00:21:34.846 "enable": false 00:21:34.846 } 00:21:34.846 }, 00:21:34.846 { 00:21:34.846 "method": "bdev_wait_for_examine" 00:21:34.846 } 00:21:34.846 ] 00:21:34.846 }, 00:21:34.846 { 00:21:34.846 "subsystem": "nbd", 00:21:34.846 "config": [] 00:21:34.846 } 00:21:34.846 ] 00:21:34.846 }' 00:21:34.846 [2024-07-24 19:13:40.411899] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:21:34.846 [2024-07-24 19:13:40.412014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688054 ] 00:21:34.846 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.846 [2024-07-24 19:13:40.493847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.104 [2024-07-24 19:13:40.635656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.363 [2024-07-24 19:13:40.825155] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:35.363 [2024-07-24 19:13:40.825298] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:35.363 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:35.363 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:35.363 19:13:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:35.621 Running I/O for 10 seconds... 00:21:45.592 00:21:45.592 Latency(us) 00:21:45.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.592 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:45.592 Verification LBA range: start 0x0 length 0x2000 00:21:45.592 TLSTESTn1 : 10.03 2614.92 10.21 0.00 0.00 48837.71 12087.75 50875.35 00:21:45.592 =================================================================================================================== 00:21:45.592 Total : 2614.92 10.21 0.00 0.00 48837.71 12087.75 50875.35 00:21:45.592 0 00:21:45.592 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:45.592 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1688054 00:21:45.592 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1688054 ']' 00:21:45.592 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1688054 00:21:45.592 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:45.592 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:45.592 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1688054 00:21:45.592 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:45.592 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:45.592 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1688054' 00:21:45.592 killing process with pid 1688054 00:21:45.592 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1688054 00:21:45.592 Received shutdown signal, test time was about 10.000000 seconds 00:21:45.592 00:21:45.592 Latency(us) 00:21:45.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.592 
=================================================================================================================== 00:21:45.592 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.592 [2024-07-24 19:13:51.210591] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:45.592 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1688054 00:21:45.851 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1687902 00:21:45.851 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1687902 ']' 00:21:45.851 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1687902 00:21:45.851 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:45.851 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:45.851 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1687902 00:21:46.109 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:46.109 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:46.109 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1687902' 00:21:46.109 killing process with pid 1687902 00:21:46.109 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1687902 00:21:46.109 [2024-07-24 19:13:51.575402] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:46.109 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1687902 00:21:46.367 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:46.368 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:46.368 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:46.368 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.368 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1689375 00:21:46.368 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:46.368 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1689375 00:21:46.368 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1689375 ']' 00:21:46.368 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.368 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:46.368 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
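The killprocess churn traced above follows the stock autotest_common.sh pattern: confirm the pid is alive with kill -0, inspect the command name with ps so a sudo wrapper is never signalled directly, then kill and reap. A condensed sketch of that logic (simplified from the xtrace output, not the verbatim helper):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                          # is the process still alive?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_1
      fi
      [ "$process_name" = sudo ] && return 1              # never signal the sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
  }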
00:21:46.368 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:46.368 19:13:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.368 [2024-07-24 19:13:51.982829] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:21:46.368 [2024-07-24 19:13:51.982934] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.368 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.626 [2024-07-24 19:13:52.091251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.626 [2024-07-24 19:13:52.297470] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.626 [2024-07-24 19:13:52.297574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.626 [2024-07-24 19:13:52.297610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.626 [2024-07-24 19:13:52.297640] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.626 [2024-07-24 19:13:52.297665] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.626 [2024-07-24 19:13:52.297729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.004 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:48.004 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:48.004 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:48.004 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:48.004 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.004 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.004 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.x35R8oNgyX 00:21:48.004 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.x35R8oNgyX 00:21:48.004 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:48.264 [2024-07-24 19:13:53.806585] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.264 19:13:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:48.832 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:49.090 [2024-07-24 19:13:54.686043] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:49.090 [2024-07-24 19:13:54.686521] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.090 19:13:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:49.350 malloc0 00:21:49.350 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:49.918 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x35R8oNgyX 00:21:50.485 [2024-07-24 19:13:55.906128] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:50.486 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1689791 00:21:50.486 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:50.486 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:50.486 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1689791 /var/tmp/bdevperf.sock 00:21:50.486 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1689791 ']' 00:21:50.486 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.486 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:50.486 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.486 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:50.486 19:13:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.486 [2024-07-24 19:13:56.008533] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
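The setup_nvmf_tgt sequence traced above (target/tls.sh@49-58) is the target half of the TLS path under test: a TCP transport, a subsystem, a TLS-enabled listener, a malloc namespace, and a host entry carrying the PSK. Pulled out of the xtrace with the long rpc.py path shortened:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420 -k        # -k marks the listener as TLS
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x35R8oNgyX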
00:21:50.486 [2024-07-24 19:13:56.008626] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1689791 ] 00:21:50.486 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.486 [2024-07-24 19:13:56.101626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.751 [2024-07-24 19:13:56.246488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.751 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:50.751 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:50.751 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.x35R8oNgyX 00:21:51.371 19:13:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:51.938 [2024-07-24 19:13:57.378260] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.938 nvme0n1 00:21:51.938 19:13:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:52.197 Running I/O for 1 seconds... 00:21:53.133 00:21:53.133 Latency(us) 00:21:53.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.133 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:53.133 Verification LBA range: start 0x0 length 0x2000 00:21:53.133 nvme0n1 : 1.03 2565.68 10.02 0.00 0.00 49225.38 8107.05 43690.67 00:21:53.133 =================================================================================================================== 00:21:53.133 Total : 2565.68 10.02 0.00 0.00 49225.38 8107.05 43690.67 00:21:53.133 0 00:21:53.133 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1689791 00:21:53.133 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1689791 ']' 00:21:53.133 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1689791 00:21:53.133 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:53.133 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:53.133 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1689791 00:21:53.133 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:53.133 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:53.133 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1689791' 00:21:53.133 killing process with pid 1689791 00:21:53.133 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1689791 00:21:53.133 Received shutdown signal, 
test time was about 1.000000 seconds 00:21:53.133 00:21:53.133 Latency(us) 00:21:53.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.133 =================================================================================================================== 00:21:53.133 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:53.133 19:13:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1689791 00:21:53.700 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1689375 00:21:53.700 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1689375 ']' 00:21:53.700 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1689375 00:21:53.700 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:53.700 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:53.700 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1689375 00:21:53.700 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:53.700 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:53.700 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1689375' 00:21:53.700 killing process with pid 1689375 00:21:53.700 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1689375 00:21:53.701 [2024-07-24 19:13:59.153861] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:53.701 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1689375 00:21:53.959 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:53.959 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:53.959 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:53.959 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.959 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1690215 00:21:53.959 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:53.959 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1690215 00:21:53.959 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1690215 ']' 00:21:53.959 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.959 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:53.959 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
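waitforlisten, traced here and after each nvmfappstart above, blocks until the freshly launched daemon answers on its RPC socket. A condensed sketch of the loop (simplified; the real helper in autotest_common.sh also handles TCP rpc_addr forms):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = max_retries; i != 0; i--)); do
          kill -0 "$pid" 2> /dev/null || return 1   # app died during startup
          # probe the socket; any successful RPC means the app is listening
          rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
          sleep 0.5
      done
      return 1
  }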
00:21:53.959 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:53.959 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.959 [2024-07-24 19:13:59.573733] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:21:53.959 [2024-07-24 19:13:59.573826] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.959 EAL: No free 2048 kB hugepages reported on node 1 00:21:53.959 [2024-07-24 19:13:59.655310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.217 [2024-07-24 19:13:59.789886] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.217 [2024-07-24 19:13:59.789956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.217 [2024-07-24 19:13:59.789976] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.217 [2024-07-24 19:13:59.789992] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.217 [2024-07-24 19:13:59.790006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:54.218 [2024-07-24 19:13:59.790042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.476 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.476 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:54.476 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:54.476 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:54.476 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.476 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.476 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:54.476 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.476 19:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.476 [2024-07-24 19:13:59.964973] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.476 malloc0 00:21:54.476 [2024-07-24 19:13:59.999357] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:54.476 [2024-07-24 19:14:00.007629] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.476 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.476 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1690353 00:21:54.476 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:54.476 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1690353 /var/tmp/bdevperf.sock 00:21:54.476 19:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1690353 ']' 00:21:54.476 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:54.476 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.476 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:54.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:54.476 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.476 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.476 [2024-07-24 19:14:00.086858] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:21:54.476 [2024-07-24 19:14:00.086968] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1690353 ] 00:21:54.476 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.476 [2024-07-24 19:14:00.164986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.735 [2024-07-24 19:14:00.305511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.993 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.993 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:54.993 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.x35R8oNgyX 00:21:55.251 19:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:55.509 [2024-07-24 19:14:01.088302] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:55.509 nvme0n1 00:21:55.509 19:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:55.768 Running I/O for 1 seconds... 
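Unlike the earlier file-path run, this attach goes through the keyring: the PSK file is first registered as key0, and the controller then references the key by name, which is the non-deprecated interface. Condensed from the trace above (target/tls.sh@257-258, rpc.py path shortened):

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.x35R8oNgyX
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
         -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
         -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1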
00:21:56.703 00:21:56.703 Latency(us) 00:21:56.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.703 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:56.703 Verification LBA range: start 0x0 length 0x2000 00:21:56.703 nvme0n1 : 1.03 2605.63 10.18 0.00 0.00 48419.67 9029.40 44661.57 00:21:56.703 =================================================================================================================== 00:21:56.703 Total : 2605.63 10.18 0.00 0.00 48419.67 9029.40 44661.57 00:21:56.703 0 00:21:56.703 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:56.703 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.703 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.961 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.961 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:56.961 "subsystems": [ 00:21:56.961 { 00:21:56.961 "subsystem": "keyring", 00:21:56.961 "config": [ 00:21:56.961 { 00:21:56.961 "method": "keyring_file_add_key", 00:21:56.961 "params": { 00:21:56.961 "name": "key0", 00:21:56.961 "path": "/tmp/tmp.x35R8oNgyX" 00:21:56.961 } 00:21:56.961 } 00:21:56.961 ] 00:21:56.961 }, 00:21:56.961 { 00:21:56.961 "subsystem": "iobuf", 00:21:56.961 "config": [ 00:21:56.961 { 00:21:56.961 "method": "iobuf_set_options", 00:21:56.961 "params": { 00:21:56.961 "small_pool_count": 8192, 00:21:56.961 "large_pool_count": 1024, 00:21:56.961 "small_bufsize": 8192, 00:21:56.961 "large_bufsize": 135168 00:21:56.961 } 00:21:56.961 } 00:21:56.961 ] 00:21:56.961 }, 00:21:56.961 { 00:21:56.961 "subsystem": "sock", 00:21:56.961 "config": [ 00:21:56.961 { 00:21:56.961 "method": "sock_set_default_impl", 00:21:56.961 "params": { 00:21:56.961 "impl_name": "posix" 00:21:56.961 } 00:21:56.961 }, 00:21:56.961 { 00:21:56.961 "method": "sock_impl_set_options", 00:21:56.961 "params": { 00:21:56.961 "impl_name": "ssl", 00:21:56.961 "recv_buf_size": 4096, 00:21:56.961 "send_buf_size": 4096, 00:21:56.961 "enable_recv_pipe": true, 00:21:56.961 "enable_quickack": false, 00:21:56.961 "enable_placement_id": 0, 00:21:56.961 "enable_zerocopy_send_server": true, 00:21:56.961 "enable_zerocopy_send_client": false, 00:21:56.961 "zerocopy_threshold": 0, 00:21:56.961 "tls_version": 0, 00:21:56.961 "enable_ktls": false 00:21:56.961 } 00:21:56.961 }, 00:21:56.961 { 00:21:56.961 "method": "sock_impl_set_options", 00:21:56.961 "params": { 00:21:56.961 "impl_name": "posix", 00:21:56.961 "recv_buf_size": 2097152, 00:21:56.961 "send_buf_size": 2097152, 00:21:56.961 "enable_recv_pipe": true, 00:21:56.961 "enable_quickack": false, 00:21:56.961 "enable_placement_id": 0, 00:21:56.961 "enable_zerocopy_send_server": true, 00:21:56.961 "enable_zerocopy_send_client": false, 00:21:56.961 "zerocopy_threshold": 0, 00:21:56.961 "tls_version": 0, 00:21:56.961 "enable_ktls": false 00:21:56.961 } 00:21:56.961 } 00:21:56.961 ] 00:21:56.961 }, 00:21:56.961 { 00:21:56.961 "subsystem": "vmd", 00:21:56.961 "config": [] 00:21:56.961 }, 00:21:56.961 { 00:21:56.961 "subsystem": "accel", 00:21:56.961 "config": [ 00:21:56.961 { 00:21:56.961 "method": "accel_set_options", 00:21:56.961 "params": { 00:21:56.961 "small_cache_size": 128, 00:21:56.961 "large_cache_size": 16, 00:21:56.961 "task_count": 2048, 00:21:56.961 "sequence_count": 2048, 00:21:56.961 "buf_count": 
2048 00:21:56.961 } 00:21:56.961 } 00:21:56.961 ] 00:21:56.961 }, 00:21:56.961 { 00:21:56.961 "subsystem": "bdev", 00:21:56.961 "config": [ 00:21:56.961 { 00:21:56.961 "method": "bdev_set_options", 00:21:56.961 "params": { 00:21:56.961 "bdev_io_pool_size": 65535, 00:21:56.961 "bdev_io_cache_size": 256, 00:21:56.961 "bdev_auto_examine": true, 00:21:56.961 "iobuf_small_cache_size": 128, 00:21:56.961 "iobuf_large_cache_size": 16 00:21:56.961 } 00:21:56.961 }, 00:21:56.961 { 00:21:56.961 "method": "bdev_raid_set_options", 00:21:56.961 "params": { 00:21:56.961 "process_window_size_kb": 1024, 00:21:56.961 "process_max_bandwidth_mb_sec": 0 00:21:56.961 } 00:21:56.961 }, 00:21:56.961 { 00:21:56.961 "method": "bdev_iscsi_set_options", 00:21:56.961 "params": { 00:21:56.961 "timeout_sec": 30 00:21:56.961 } 00:21:56.961 }, 00:21:56.961 { 00:21:56.961 "method": "bdev_nvme_set_options", 00:21:56.961 "params": { 00:21:56.961 "action_on_timeout": "none", 00:21:56.961 "timeout_us": 0, 00:21:56.961 "timeout_admin_us": 0, 00:21:56.961 "keep_alive_timeout_ms": 10000, 00:21:56.961 "arbitration_burst": 0, 00:21:56.961 "low_priority_weight": 0, 00:21:56.961 "medium_priority_weight": 0, 00:21:56.961 "high_priority_weight": 0, 00:21:56.961 "nvme_adminq_poll_period_us": 10000, 00:21:56.961 "nvme_ioq_poll_period_us": 0, 00:21:56.961 "io_queue_requests": 0, 00:21:56.961 "delay_cmd_submit": true, 00:21:56.961 "transport_retry_count": 4, 00:21:56.961 "bdev_retry_count": 3, 00:21:56.961 "transport_ack_timeout": 0, 00:21:56.961 "ctrlr_loss_timeout_sec": 0, 00:21:56.961 "reconnect_delay_sec": 0, 00:21:56.961 "fast_io_fail_timeout_sec": 0, 00:21:56.961 "disable_auto_failback": false, 00:21:56.962 "generate_uuids": false, 00:21:56.962 "transport_tos": 0, 00:21:56.962 "nvme_error_stat": false, 00:21:56.962 "rdma_srq_size": 0, 00:21:56.962 "io_path_stat": false, 00:21:56.962 "allow_accel_sequence": false, 00:21:56.962 "rdma_max_cq_size": 0, 00:21:56.962 "rdma_cm_event_timeout_ms": 0, 00:21:56.962 "dhchap_digests": [ 00:21:56.962 "sha256", 00:21:56.962 "sha384", 00:21:56.962 "sha512" 00:21:56.962 ], 00:21:56.962 "dhchap_dhgroups": [ 00:21:56.962 "null", 00:21:56.962 "ffdhe2048", 00:21:56.962 "ffdhe3072", 00:21:56.962 "ffdhe4096", 00:21:56.962 "ffdhe6144", 00:21:56.962 "ffdhe8192" 00:21:56.962 ] 00:21:56.962 } 00:21:56.962 }, 00:21:56.962 { 00:21:56.962 "method": "bdev_nvme_set_hotplug", 00:21:56.962 "params": { 00:21:56.962 "period_us": 100000, 00:21:56.962 "enable": false 00:21:56.962 } 00:21:56.962 }, 00:21:56.962 { 00:21:56.962 "method": "bdev_malloc_create", 00:21:56.962 "params": { 00:21:56.962 "name": "malloc0", 00:21:56.962 "num_blocks": 8192, 00:21:56.962 "block_size": 4096, 00:21:56.962 "physical_block_size": 4096, 00:21:56.962 "uuid": "00297548-49ee-42fa-a5bc-f709c3ffc914", 00:21:56.962 "optimal_io_boundary": 0, 00:21:56.962 "md_size": 0, 00:21:56.962 "dif_type": 0, 00:21:56.962 "dif_is_head_of_md": false, 00:21:56.962 "dif_pi_format": 0 00:21:56.962 } 00:21:56.962 }, 00:21:56.962 { 00:21:56.962 "method": "bdev_wait_for_examine" 00:21:56.962 } 00:21:56.962 ] 00:21:56.962 }, 00:21:56.962 { 00:21:56.962 "subsystem": "nbd", 00:21:56.962 "config": [] 00:21:56.962 }, 00:21:56.962 { 00:21:56.962 "subsystem": "scheduler", 00:21:56.962 "config": [ 00:21:56.962 { 00:21:56.962 "method": "framework_set_scheduler", 00:21:56.962 "params": { 00:21:56.962 "name": "static" 00:21:56.962 } 00:21:56.962 } 00:21:56.962 ] 00:21:56.962 }, 00:21:56.962 { 00:21:56.962 "subsystem": "nvmf", 00:21:56.962 "config": [ 00:21:56.962 { 00:21:56.962 
"method": "nvmf_set_config", 00:21:56.962 "params": { 00:21:56.962 "discovery_filter": "match_any", 00:21:56.962 "admin_cmd_passthru": { 00:21:56.962 "identify_ctrlr": false 00:21:56.962 } 00:21:56.962 } 00:21:56.962 }, 00:21:56.962 { 00:21:56.962 "method": "nvmf_set_max_subsystems", 00:21:56.962 "params": { 00:21:56.962 "max_subsystems": 1024 00:21:56.962 } 00:21:56.962 }, 00:21:56.962 { 00:21:56.962 "method": "nvmf_set_crdt", 00:21:56.962 "params": { 00:21:56.962 "crdt1": 0, 00:21:56.962 "crdt2": 0, 00:21:56.962 "crdt3": 0 00:21:56.962 } 00:21:56.962 }, 00:21:56.962 { 00:21:56.962 "method": "nvmf_create_transport", 00:21:56.962 "params": { 00:21:56.962 "trtype": "TCP", 00:21:56.962 "max_queue_depth": 128, 00:21:56.962 "max_io_qpairs_per_ctrlr": 127, 00:21:56.962 "in_capsule_data_size": 4096, 00:21:56.962 "max_io_size": 131072, 00:21:56.962 "io_unit_size": 131072, 00:21:56.962 "max_aq_depth": 128, 00:21:56.962 "num_shared_buffers": 511, 00:21:56.962 "buf_cache_size": 4294967295, 00:21:56.962 "dif_insert_or_strip": false, 00:21:56.962 "zcopy": false, 00:21:56.962 "c2h_success": false, 00:21:56.962 "sock_priority": 0, 00:21:56.962 "abort_timeout_sec": 1, 00:21:56.962 "ack_timeout": 0, 00:21:56.962 "data_wr_pool_size": 0 00:21:56.962 } 00:21:56.962 }, 00:21:56.962 { 00:21:56.962 "method": "nvmf_create_subsystem", 00:21:56.962 "params": { 00:21:56.962 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.962 "allow_any_host": false, 00:21:56.962 "serial_number": "00000000000000000000", 00:21:56.962 "model_number": "SPDK bdev Controller", 00:21:56.962 "max_namespaces": 32, 00:21:56.962 "min_cntlid": 1, 00:21:56.962 "max_cntlid": 65519, 00:21:56.962 "ana_reporting": false 00:21:56.962 } 00:21:56.962 }, 00:21:56.962 { 00:21:56.962 "method": "nvmf_subsystem_add_host", 00:21:56.962 "params": { 00:21:56.962 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.962 "host": "nqn.2016-06.io.spdk:host1", 00:21:56.962 "psk": "key0" 00:21:56.962 } 00:21:56.962 }, 00:21:56.962 { 00:21:56.962 "method": "nvmf_subsystem_add_ns", 00:21:56.962 "params": { 00:21:56.962 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.962 "namespace": { 00:21:56.962 "nsid": 1, 00:21:56.962 "bdev_name": "malloc0", 00:21:56.962 "nguid": "0029754849EE42FAA5BCF709C3FFC914", 00:21:56.962 "uuid": "00297548-49ee-42fa-a5bc-f709c3ffc914", 00:21:56.962 "no_auto_visible": false 00:21:56.962 } 00:21:56.962 } 00:21:56.962 }, 00:21:56.962 { 00:21:56.962 "method": "nvmf_subsystem_add_listener", 00:21:56.962 "params": { 00:21:56.962 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.962 "listen_address": { 00:21:56.962 "trtype": "TCP", 00:21:56.962 "adrfam": "IPv4", 00:21:56.962 "traddr": "10.0.0.2", 00:21:56.962 "trsvcid": "4420" 00:21:56.962 }, 00:21:56.962 "secure_channel": false, 00:21:56.962 "sock_impl": "ssl" 00:21:56.962 } 00:21:56.962 } 00:21:56.962 ] 00:21:56.962 } 00:21:56.962 ] 00:21:56.962 }' 00:21:56.962 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:57.220 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:57.220 "subsystems": [ 00:21:57.220 { 00:21:57.220 "subsystem": "keyring", 00:21:57.220 "config": [ 00:21:57.220 { 00:21:57.220 "method": "keyring_file_add_key", 00:21:57.220 "params": { 00:21:57.220 "name": "key0", 00:21:57.220 "path": "/tmp/tmp.x35R8oNgyX" 00:21:57.220 } 00:21:57.220 } 00:21:57.220 ] 00:21:57.220 }, 00:21:57.220 { 00:21:57.220 "subsystem": "iobuf", 00:21:57.220 
"config": [ 00:21:57.220 { 00:21:57.220 "method": "iobuf_set_options", 00:21:57.220 "params": { 00:21:57.220 "small_pool_count": 8192, 00:21:57.220 "large_pool_count": 1024, 00:21:57.221 "small_bufsize": 8192, 00:21:57.221 "large_bufsize": 135168 00:21:57.221 } 00:21:57.221 } 00:21:57.221 ] 00:21:57.221 }, 00:21:57.221 { 00:21:57.221 "subsystem": "sock", 00:21:57.221 "config": [ 00:21:57.221 { 00:21:57.221 "method": "sock_set_default_impl", 00:21:57.221 "params": { 00:21:57.221 "impl_name": "posix" 00:21:57.221 } 00:21:57.221 }, 00:21:57.221 { 00:21:57.221 "method": "sock_impl_set_options", 00:21:57.221 "params": { 00:21:57.221 "impl_name": "ssl", 00:21:57.221 "recv_buf_size": 4096, 00:21:57.221 "send_buf_size": 4096, 00:21:57.221 "enable_recv_pipe": true, 00:21:57.221 "enable_quickack": false, 00:21:57.221 "enable_placement_id": 0, 00:21:57.221 "enable_zerocopy_send_server": true, 00:21:57.221 "enable_zerocopy_send_client": false, 00:21:57.221 "zerocopy_threshold": 0, 00:21:57.221 "tls_version": 0, 00:21:57.221 "enable_ktls": false 00:21:57.221 } 00:21:57.221 }, 00:21:57.221 { 00:21:57.221 "method": "sock_impl_set_options", 00:21:57.221 "params": { 00:21:57.221 "impl_name": "posix", 00:21:57.221 "recv_buf_size": 2097152, 00:21:57.221 "send_buf_size": 2097152, 00:21:57.221 "enable_recv_pipe": true, 00:21:57.221 "enable_quickack": false, 00:21:57.221 "enable_placement_id": 0, 00:21:57.221 "enable_zerocopy_send_server": true, 00:21:57.221 "enable_zerocopy_send_client": false, 00:21:57.221 "zerocopy_threshold": 0, 00:21:57.221 "tls_version": 0, 00:21:57.221 "enable_ktls": false 00:21:57.221 } 00:21:57.221 } 00:21:57.221 ] 00:21:57.221 }, 00:21:57.221 { 00:21:57.221 "subsystem": "vmd", 00:21:57.221 "config": [] 00:21:57.221 }, 00:21:57.221 { 00:21:57.221 "subsystem": "accel", 00:21:57.221 "config": [ 00:21:57.221 { 00:21:57.221 "method": "accel_set_options", 00:21:57.221 "params": { 00:21:57.221 "small_cache_size": 128, 00:21:57.221 "large_cache_size": 16, 00:21:57.221 "task_count": 2048, 00:21:57.221 "sequence_count": 2048, 00:21:57.221 "buf_count": 2048 00:21:57.221 } 00:21:57.221 } 00:21:57.221 ] 00:21:57.221 }, 00:21:57.221 { 00:21:57.221 "subsystem": "bdev", 00:21:57.221 "config": [ 00:21:57.221 { 00:21:57.221 "method": "bdev_set_options", 00:21:57.221 "params": { 00:21:57.221 "bdev_io_pool_size": 65535, 00:21:57.221 "bdev_io_cache_size": 256, 00:21:57.221 "bdev_auto_examine": true, 00:21:57.221 "iobuf_small_cache_size": 128, 00:21:57.221 "iobuf_large_cache_size": 16 00:21:57.221 } 00:21:57.221 }, 00:21:57.221 { 00:21:57.221 "method": "bdev_raid_set_options", 00:21:57.221 "params": { 00:21:57.221 "process_window_size_kb": 1024, 00:21:57.221 "process_max_bandwidth_mb_sec": 0 00:21:57.221 } 00:21:57.221 }, 00:21:57.221 { 00:21:57.221 "method": "bdev_iscsi_set_options", 00:21:57.221 "params": { 00:21:57.221 "timeout_sec": 30 00:21:57.221 } 00:21:57.221 }, 00:21:57.221 { 00:21:57.221 "method": "bdev_nvme_set_options", 00:21:57.221 "params": { 00:21:57.221 "action_on_timeout": "none", 00:21:57.221 "timeout_us": 0, 00:21:57.221 "timeout_admin_us": 0, 00:21:57.221 "keep_alive_timeout_ms": 10000, 00:21:57.221 "arbitration_burst": 0, 00:21:57.221 "low_priority_weight": 0, 00:21:57.221 "medium_priority_weight": 0, 00:21:57.221 "high_priority_weight": 0, 00:21:57.221 "nvme_adminq_poll_period_us": 10000, 00:21:57.221 "nvme_ioq_poll_period_us": 0, 00:21:57.221 "io_queue_requests": 512, 00:21:57.221 "delay_cmd_submit": true, 00:21:57.221 "transport_retry_count": 4, 00:21:57.221 "bdev_retry_count": 3, 
00:21:57.221 "transport_ack_timeout": 0, 00:21:57.221 "ctrlr_loss_timeout_sec": 0, 00:21:57.221 "reconnect_delay_sec": 0, 00:21:57.221 "fast_io_fail_timeout_sec": 0, 00:21:57.221 "disable_auto_failback": false, 00:21:57.221 "generate_uuids": false, 00:21:57.221 "transport_tos": 0, 00:21:57.221 "nvme_error_stat": false, 00:21:57.221 "rdma_srq_size": 0, 00:21:57.221 "io_path_stat": false, 00:21:57.221 "allow_accel_sequence": false, 00:21:57.221 "rdma_max_cq_size": 0, 00:21:57.221 "rdma_cm_event_timeout_ms": 0, 00:21:57.221 "dhchap_digests": [ 00:21:57.221 "sha256", 00:21:57.221 "sha384", 00:21:57.221 "sha512" 00:21:57.221 ], 00:21:57.221 "dhchap_dhgroups": [ 00:21:57.221 "null", 00:21:57.221 "ffdhe2048", 00:21:57.221 "ffdhe3072", 00:21:57.221 "ffdhe4096", 00:21:57.221 "ffdhe6144", 00:21:57.221 "ffdhe8192" 00:21:57.221 ] 00:21:57.221 } 00:21:57.221 }, 00:21:57.221 { 00:21:57.221 "method": "bdev_nvme_attach_controller", 00:21:57.221 "params": { 00:21:57.221 "name": "nvme0", 00:21:57.221 "trtype": "TCP", 00:21:57.221 "adrfam": "IPv4", 00:21:57.221 "traddr": "10.0.0.2", 00:21:57.221 "trsvcid": "4420", 00:21:57.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.221 "prchk_reftag": false, 00:21:57.221 "prchk_guard": false, 00:21:57.221 "ctrlr_loss_timeout_sec": 0, 00:21:57.221 "reconnect_delay_sec": 0, 00:21:57.221 "fast_io_fail_timeout_sec": 0, 00:21:57.221 "psk": "key0", 00:21:57.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.221 "hdgst": false, 00:21:57.221 "ddgst": false 00:21:57.221 } 00:21:57.221 }, 00:21:57.221 { 00:21:57.221 "method": "bdev_nvme_set_hotplug", 00:21:57.221 "params": { 00:21:57.221 "period_us": 100000, 00:21:57.221 "enable": false 00:21:57.221 } 00:21:57.221 }, 00:21:57.221 { 00:21:57.221 "method": "bdev_enable_histogram", 00:21:57.221 "params": { 00:21:57.221 "name": "nvme0n1", 00:21:57.221 "enable": true 00:21:57.221 } 00:21:57.221 }, 00:21:57.221 { 00:21:57.221 "method": "bdev_wait_for_examine" 00:21:57.221 } 00:21:57.221 ] 00:21:57.221 }, 00:21:57.221 { 00:21:57.221 "subsystem": "nbd", 00:21:57.221 "config": [] 00:21:57.221 } 00:21:57.221 ] 00:21:57.221 }' 00:21:57.221 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1690353 00:21:57.221 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1690353 ']' 00:21:57.221 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1690353 00:21:57.221 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:57.221 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.221 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1690353 00:21:57.480 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:57.480 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:57.480 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1690353' 00:21:57.480 killing process with pid 1690353 00:21:57.480 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1690353 00:21:57.480 Received shutdown signal, test time was about 1.000000 seconds 00:21:57.480 00:21:57.480 Latency(us) 00:21:57.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.480 
=================================================================================================================== 00:21:57.480 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:57.480 19:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1690353 00:21:57.738 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1690215 00:21:57.738 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1690215 ']' 00:21:57.738 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1690215 00:21:57.738 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:57.738 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:57.738 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1690215 00:21:57.738 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:57.738 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:57.738 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1690215' 00:21:57.738 killing process with pid 1690215 00:21:57.738 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1690215 00:21:57.738 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1690215 00:21:58.304 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:58.304 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:58.304 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:58.304 "subsystems": [ 00:21:58.304 { 00:21:58.304 "subsystem": "keyring", 00:21:58.304 "config": [ 00:21:58.304 { 00:21:58.304 "method": "keyring_file_add_key", 00:21:58.304 "params": { 00:21:58.304 "name": "key0", 00:21:58.304 "path": "/tmp/tmp.x35R8oNgyX" 00:21:58.304 } 00:21:58.304 } 00:21:58.304 ] 00:21:58.304 }, 00:21:58.304 { 00:21:58.304 "subsystem": "iobuf", 00:21:58.304 "config": [ 00:21:58.304 { 00:21:58.304 "method": "iobuf_set_options", 00:21:58.304 "params": { 00:21:58.304 "small_pool_count": 8192, 00:21:58.304 "large_pool_count": 1024, 00:21:58.304 "small_bufsize": 8192, 00:21:58.304 "large_bufsize": 135168 00:21:58.304 } 00:21:58.304 } 00:21:58.304 ] 00:21:58.304 }, 00:21:58.304 { 00:21:58.304 "subsystem": "sock", 00:21:58.304 "config": [ 00:21:58.304 { 00:21:58.304 "method": "sock_set_default_impl", 00:21:58.304 "params": { 00:21:58.304 "impl_name": "posix" 00:21:58.304 } 00:21:58.304 }, 00:21:58.304 { 00:21:58.304 "method": "sock_impl_set_options", 00:21:58.304 "params": { 00:21:58.304 "impl_name": "ssl", 00:21:58.304 "recv_buf_size": 4096, 00:21:58.304 "send_buf_size": 4096, 00:21:58.304 "enable_recv_pipe": true, 00:21:58.304 "enable_quickack": false, 00:21:58.304 "enable_placement_id": 0, 00:21:58.304 "enable_zerocopy_send_server": true, 00:21:58.304 "enable_zerocopy_send_client": false, 00:21:58.304 "zerocopy_threshold": 0, 00:21:58.304 "tls_version": 0, 00:21:58.304 "enable_ktls": false 00:21:58.304 } 00:21:58.304 }, 00:21:58.304 { 00:21:58.304 "method": "sock_impl_set_options", 00:21:58.304 "params": { 00:21:58.304 "impl_name": "posix", 00:21:58.304 "recv_buf_size": 2097152, 
00:21:58.304 "send_buf_size": 2097152, 00:21:58.304 "enable_recv_pipe": true, 00:21:58.304 "enable_quickack": false, 00:21:58.304 "enable_placement_id": 0, 00:21:58.304 "enable_zerocopy_send_server": true, 00:21:58.304 "enable_zerocopy_send_client": false, 00:21:58.304 "zerocopy_threshold": 0, 00:21:58.304 "tls_version": 0, 00:21:58.304 "enable_ktls": false 00:21:58.304 } 00:21:58.304 } 00:21:58.304 ] 00:21:58.304 }, 00:21:58.304 { 00:21:58.304 "subsystem": "vmd", 00:21:58.304 "config": [] 00:21:58.304 }, 00:21:58.304 { 00:21:58.304 "subsystem": "accel", 00:21:58.304 "config": [ 00:21:58.304 { 00:21:58.304 "method": "accel_set_options", 00:21:58.304 "params": { 00:21:58.304 "small_cache_size": 128, 00:21:58.304 "large_cache_size": 16, 00:21:58.304 "task_count": 2048, 00:21:58.304 "sequence_count": 2048, 00:21:58.304 "buf_count": 2048 00:21:58.304 } 00:21:58.304 } 00:21:58.304 ] 00:21:58.304 }, 00:21:58.304 { 00:21:58.304 "subsystem": "bdev", 00:21:58.304 "config": [ 00:21:58.304 { 00:21:58.304 "method": "bdev_set_options", 00:21:58.304 "params": { 00:21:58.304 "bdev_io_pool_size": 65535, 00:21:58.304 "bdev_io_cache_size": 256, 00:21:58.304 "bdev_auto_examine": true, 00:21:58.304 "iobuf_small_cache_size": 128, 00:21:58.304 "iobuf_large_cache_size": 16 00:21:58.304 } 00:21:58.304 }, 00:21:58.304 { 00:21:58.304 "method": "bdev_raid_set_options", 00:21:58.304 "params": { 00:21:58.304 "process_window_size_kb": 1024, 00:21:58.304 "process_max_bandwidth_mb_sec": 0 00:21:58.304 } 00:21:58.304 }, 00:21:58.304 { 00:21:58.304 "method": "bdev_iscsi_set_options", 00:21:58.304 "params": { 00:21:58.304 "timeout_sec": 30 00:21:58.304 } 00:21:58.304 }, 00:21:58.304 { 00:21:58.304 "method": "bdev_nvme_set_options", 00:21:58.304 "params": { 00:21:58.304 "action_on_timeout": "none", 00:21:58.304 "timeout_us": 0, 00:21:58.304 "timeout_admin_us": 0, 00:21:58.304 "keep_alive_timeout_ms": 10000, 00:21:58.304 "arbitration_burst": 0, 00:21:58.304 "low_priority_weight": 0, 00:21:58.304 "medium_priority_weight": 0, 00:21:58.304 "high_priority_weight": 0, 00:21:58.304 "nvme_adminq_poll_period_us": 10000, 00:21:58.304 "nvme_ioq_poll_period_us": 0, 00:21:58.304 "io_queue_requests": 0, 00:21:58.304 "delay_cmd_submit": true, 00:21:58.304 "transport_retry_count": 4, 00:21:58.304 "bdev_retry_count": 3, 00:21:58.304 "transport_ack_timeout": 0, 00:21:58.304 "ctrlr_loss_timeout_sec": 0, 00:21:58.304 "reconnect_delay_sec": 0, 00:21:58.304 "fast_io_fail_timeout_sec": 0, 00:21:58.304 "disable_auto_failback": false, 00:21:58.304 "generate_uuids": false, 00:21:58.304 "transport_tos": 0, 00:21:58.304 "nvme_error_stat": false, 00:21:58.304 "rdma_srq_size": 0, 00:21:58.304 "io_path_stat": false, 00:21:58.304 "allow_accel_sequence": false, 00:21:58.304 "rdma_max_cq_size": 0, 00:21:58.304 "rdma_cm_event_timeout_ms": 0, 00:21:58.304 "dhchap_digests": [ 00:21:58.304 "sha256", 00:21:58.304 "sha384", 00:21:58.304 "sha512" 00:21:58.304 ], 00:21:58.304 "dhchap_dhgroups": [ 00:21:58.304 "null", 00:21:58.304 "ffdhe2048", 00:21:58.304 "ffdhe3072", 00:21:58.304 "ffdhe4096", 00:21:58.304 "ffdhe6144", 00:21:58.304 "ffdhe8192" 00:21:58.304 ] 00:21:58.304 } 00:21:58.304 }, 00:21:58.304 { 00:21:58.304 "method": "bdev_nvme_set_hotplug", 00:21:58.304 "params": { 00:21:58.304 "period_us": 100000, 00:21:58.304 "enable": false 00:21:58.304 } 00:21:58.304 }, 00:21:58.304 { 00:21:58.304 "method": "bdev_malloc_create", 00:21:58.304 "params": { 00:21:58.304 "name": "malloc0", 00:21:58.304 "num_blocks": 8192, 00:21:58.304 "block_size": 4096, 00:21:58.304 
"physical_block_size": 4096, 00:21:58.304 "uuid": "00297548-49ee-42fa-a5bc-f709c3ffc914", 00:21:58.304 "optimal_io_boundary": 0, 00:21:58.304 "md_size": 0, 00:21:58.304 "dif_type": 0, 00:21:58.304 "dif_is_head_of_md": false, 00:21:58.304 "dif_pi_format": 0 00:21:58.304 } 00:21:58.304 }, 00:21:58.304 { 00:21:58.304 "method": "bdev_wait_for_examine" 00:21:58.304 } 00:21:58.304 ] 00:21:58.304 }, 00:21:58.304 { 00:21:58.304 "subsystem": "nbd", 00:21:58.304 "config": [] 00:21:58.304 }, 00:21:58.304 { 00:21:58.304 "subsystem": "scheduler", 00:21:58.304 "config": [ 00:21:58.305 { 00:21:58.305 "method": "framework_set_scheduler", 00:21:58.305 "params": { 00:21:58.305 "name": "static" 00:21:58.305 } 00:21:58.305 } 00:21:58.305 ] 00:21:58.305 }, 00:21:58.305 { 00:21:58.305 "subsystem": "nvmf", 00:21:58.305 "config": [ 00:21:58.305 { 00:21:58.305 "method": "nvmf_set_config", 00:21:58.305 "params": { 00:21:58.305 "discovery_filter": "match_any", 00:21:58.305 "admin_cmd_passthru": { 00:21:58.305 "identify_ctrlr": false 00:21:58.305 } 00:21:58.305 } 00:21:58.305 }, 00:21:58.305 { 00:21:58.305 "method": "nvmf_set_max_subsystems", 00:21:58.305 "params": { 00:21:58.305 "max_subsystems": 1024 00:21:58.305 } 00:21:58.305 }, 00:21:58.305 { 00:21:58.305 "method": "nvmf_set_crdt", 00:21:58.305 "params": { 00:21:58.305 "crdt1": 0, 00:21:58.305 "crdt2": 0, 00:21:58.305 "crdt3": 0 00:21:58.305 } 00:21:58.305 }, 00:21:58.305 { 00:21:58.305 "method": "nvmf_create_transport", 00:21:58.305 "params": { 00:21:58.305 "trtype": "TCP", 00:21:58.305 "max_queue_depth": 128, 00:21:58.305 "max_io_qpairs_per_ctrlr": 127, 00:21:58.305 "in_capsule_data_size": 4096, 00:21:58.305 "max_io_size": 131072, 00:21:58.305 "io_unit_size": 131072, 00:21:58.305 "max_aq_depth": 128, 00:21:58.305 "num_shared_buffers": 511, 00:21:58.305 "buf_cache_size": 4294967295, 00:21:58.305 "dif_insert_or_strip": false, 00:21:58.305 "zcopy": false, 00:21:58.305 "c2h_success": false, 00:21:58.305 "sock_priority": 0, 00:21:58.305 "abort_timeout_sec": 1, 00:21:58.305 "ack_timeout": 0, 00:21:58.305 "data_wr_pool_size": 0 00:21:58.305 } 00:21:58.305 }, 00:21:58.305 { 00:21:58.305 "method": "nvmf_create_subsystem", 00:21:58.305 "params": { 00:21:58.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.305 "allow_any_host": false, 00:21:58.305 "serial_number": "00000000000000000000", 00:21:58.305 "model_number": "SPDK bdev Controller", 00:21:58.305 "max_namespaces": 32, 00:21:58.305 "min_cntlid": 1, 00:21:58.305 "max_cntlid": 65519, 00:21:58.305 "ana_reporting": false 00:21:58.305 } 00:21:58.305 }, 00:21:58.305 { 00:21:58.305 "method": "nvmf_subsystem_add_host", 00:21:58.305 "params": { 00:21:58.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.305 "host": "nqn.2016-06.io.spdk:host1", 00:21:58.305 "psk": "key0" 00:21:58.305 } 00:21:58.305 }, 00:21:58.305 { 00:21:58.305 "method": "nvmf_subsystem_add_ns", 00:21:58.305 "params": { 00:21:58.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.305 "namespace": { 00:21:58.305 "nsid": 1, 00:21:58.305 "bdev_name": "malloc0", 00:21:58.305 "nguid": "0029754849EE42FAA5BCF709C3FFC914", 00:21:58.305 "uuid": "00297548-49ee-42fa-a5bc-f709c3ffc914", 00:21:58.305 "no_auto_visible": false 00:21:58.305 } 00:21:58.305 } 00:21:58.305 }, 00:21:58.305 { 00:21:58.305 "method": "nvmf_subsystem_add_listener", 00:21:58.305 "params": { 00:21:58.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:58.305 "listen_address": { 00:21:58.305 "trtype": "TCP", 00:21:58.305 "adrfam": "IPv4", 00:21:58.305 "traddr": "10.0.0.2", 00:21:58.305 "trsvcid": "4420" 
00:21:58.305 }, 00:21:58.305 "secure_channel": false, 00:21:58.305 "sock_impl": "ssl" 00:21:58.305 } 00:21:58.305 } 00:21:58.305 ] 00:21:58.305 } 00:21:58.305 ] 00:21:58.305 }' 00:21:58.305 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:58.305 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.305 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1690763 00:21:58.305 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:58.305 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1690763 00:21:58.305 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1690763 ']' 00:21:58.305 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.305 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:58.305 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.305 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:58.305 19:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.305 [2024-07-24 19:14:03.800918] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:21:58.305 [2024-07-24 19:14:03.801024] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.305 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.305 [2024-07-24 19:14:03.894859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.564 [2024-07-24 19:14:04.036824] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.564 [2024-07-24 19:14:04.036896] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.564 [2024-07-24 19:14:04.036917] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.564 [2024-07-24 19:14:04.036935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.564 [2024-07-24 19:14:04.036950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
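The nvmfappstart call above is fed the JSON blob just echoed through bash process substitution, which is where '-c /dev/fd/62' comes from: the new target boots with the exact configuration captured earlier by save_config instead of being rebuilt RPC by RPC. A sketch of the pattern, assuming tgtcfg holds the saved JSON:

  # restart the target from a captured configuration; <(...) supplies /dev/fd/62
  tgtcfg=$(rpc.py save_config)
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")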
00:21:58.564 [2024-07-24 19:14:04.037052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.824 [2024-07-24 19:14:04.350031] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.824 [2024-07-24 19:14:04.397633] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:58.824 [2024-07-24 19:14:04.398070] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.759 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:59.759 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:59.759 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:59.759 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:59.759 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.759 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.759 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1690917 00:21:59.759 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1690917 /var/tmp/bdevperf.sock 00:21:59.759 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1690917 ']' 00:21:59.759 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.759 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:59.759 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:59.759 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:59.759 "subsystems": [ 00:21:59.759 { 00:21:59.759 "subsystem": "keyring", 00:21:59.759 "config": [ 00:21:59.759 { 00:21:59.759 "method": "keyring_file_add_key", 00:21:59.759 "params": { 00:21:59.759 "name": "key0", 00:21:59.759 "path": "/tmp/tmp.x35R8oNgyX" 00:21:59.759 } 00:21:59.759 } 00:21:59.759 ] 00:21:59.759 }, 00:21:59.759 { 00:21:59.759 "subsystem": "iobuf", 00:21:59.759 "config": [ 00:21:59.759 { 00:21:59.759 "method": "iobuf_set_options", 00:21:59.759 "params": { 00:21:59.759 "small_pool_count": 8192, 00:21:59.759 "large_pool_count": 1024, 00:21:59.759 "small_bufsize": 8192, 00:21:59.759 "large_bufsize": 135168 00:21:59.759 } 00:21:59.759 } 00:21:59.759 ] 00:21:59.759 }, 00:21:59.759 { 00:21:59.759 "subsystem": "sock", 00:21:59.759 "config": [ 00:21:59.759 { 00:21:59.759 "method": "sock_set_default_impl", 00:21:59.759 "params": { 00:21:59.759 "impl_name": "posix" 00:21:59.759 } 00:21:59.759 }, 00:21:59.759 { 00:21:59.759 "method": "sock_impl_set_options", 00:21:59.759 "params": { 00:21:59.759 "impl_name": "ssl", 00:21:59.759 "recv_buf_size": 4096, 00:21:59.759 "send_buf_size": 4096, 00:21:59.759 "enable_recv_pipe": true, 00:21:59.759 "enable_quickack": false, 00:21:59.759 "enable_placement_id": 0, 00:21:59.760 "enable_zerocopy_send_server": true, 00:21:59.760 "enable_zerocopy_send_client": false, 00:21:59.760 "zerocopy_threshold": 0, 00:21:59.760 "tls_version": 0, 00:21:59.760 "enable_ktls": false 00:21:59.760 } 
00:21:59.760 }, 00:21:59.760 { 00:21:59.760 "method": "sock_impl_set_options", 00:21:59.760 "params": { 00:21:59.760 "impl_name": "posix", 00:21:59.760 "recv_buf_size": 2097152, 00:21:59.760 "send_buf_size": 2097152, 00:21:59.760 "enable_recv_pipe": true, 00:21:59.760 "enable_quickack": false, 00:21:59.760 "enable_placement_id": 0, 00:21:59.760 "enable_zerocopy_send_server": true, 00:21:59.760 "enable_zerocopy_send_client": false, 00:21:59.760 "zerocopy_threshold": 0, 00:21:59.760 "tls_version": 0, 00:21:59.760 "enable_ktls": false 00:21:59.760 } 00:21:59.760 } 00:21:59.760 ] 00:21:59.760 }, 00:21:59.760 { 00:21:59.760 "subsystem": "vmd", 00:21:59.760 "config": [] 00:21:59.760 }, 00:21:59.760 { 00:21:59.760 "subsystem": "accel", 00:21:59.760 "config": [ 00:21:59.760 { 00:21:59.760 "method": "accel_set_options", 00:21:59.760 "params": { 00:21:59.760 "small_cache_size": 128, 00:21:59.760 "large_cache_size": 16, 00:21:59.760 "task_count": 2048, 00:21:59.760 "sequence_count": 2048, 00:21:59.760 "buf_count": 2048 00:21:59.760 } 00:21:59.760 } 00:21:59.760 ] 00:21:59.760 }, 00:21:59.760 { 00:21:59.760 "subsystem": "bdev", 00:21:59.760 "config": [ 00:21:59.760 { 00:21:59.760 "method": "bdev_set_options", 00:21:59.760 "params": { 00:21:59.760 "bdev_io_pool_size": 65535, 00:21:59.760 "bdev_io_cache_size": 256, 00:21:59.760 "bdev_auto_examine": true, 00:21:59.760 "iobuf_small_cache_size": 128, 00:21:59.760 "iobuf_large_cache_size": 16 00:21:59.760 } 00:21:59.760 }, 00:21:59.760 { 00:21:59.760 "method": "bdev_raid_set_options", 00:21:59.760 "params": { 00:21:59.760 "process_window_size_kb": 1024, 00:21:59.760 "process_max_bandwidth_mb_sec": 0 00:21:59.760 } 00:21:59.760 }, 00:21:59.760 { 00:21:59.760 "method": "bdev_iscsi_set_options", 00:21:59.760 "params": { 00:21:59.760 "timeout_sec": 30 00:21:59.760 } 00:21:59.760 }, 00:21:59.760 { 00:21:59.760 "method": "bdev_nvme_set_options", 00:21:59.760 "params": { 00:21:59.760 "action_on_timeout": "none", 00:21:59.760 "timeout_us": 0, 00:21:59.760 "timeout_admin_us": 0, 00:21:59.760 "keep_alive_timeout_ms": 10000, 00:21:59.760 "arbitration_burst": 0, 00:21:59.760 "low_priority_weight": 0, 00:21:59.760 "medium_priority_weight": 0, 00:21:59.760 "high_priority_weight": 0, 00:21:59.760 "nvme_adminq_poll_period_us": 10000, 00:21:59.760 "nvme_ioq_poll_period_us": 0, 00:21:59.760 "io_queue_requests": 512, 00:21:59.760 "delay_cmd_submit": true, 00:21:59.760 "transport_retry_count": 4, 00:21:59.760 "bdev_retry_count": 3, 00:21:59.760 "transport_ack_timeout": 0, 00:21:59.760 "ctrlr_loss_timeout_sec": 0, 00:21:59.760 "reconnect_delay_sec": 0, 00:21:59.760 "fast_io_fail_timeout_sec": 0, 00:21:59.760 "disable_auto_failback": false, 00:21:59.760 "generate_uuids": false, 00:21:59.760 "transport_tos": 0, 00:21:59.760 "nvme_error_stat": false, 00:21:59.760 "rdma_srq_size": 0, 00:21:59.760 "io_path_stat": false, 00:21:59.760 "allow_accel_sequence": false, 00:21:59.760 "rdma_max_cq_size": 0, 00:21:59.760 "rdma_cm_event_timeout_ms": 0, 00:21:59.760 "dhchap_digests": [ 00:21:59.760 "sha256", 00:21:59.760 "sha384", 00:21:59.760 "sha512" 00:21:59.760 ], 00:21:59.760 "dhchap_dhgroups": [ 00:21:59.760 "null", 00:21:59.760 "ffdhe2048", 00:21:59.760 "ffdhe3072", 00:21:59.760 "ffdhe4096", 00:21:59.760 "ffdhe6144", 00:21:59.760 "ffdhe8192" 00:21:59.760 ] 00:21:59.760 } 00:21:59.760 }, 00:21:59.760 { 00:21:59.760 "method": "bdev_nvme_attach_controller", 00:21:59.760 "params": { 00:21:59.760 "name": "nvme0", 00:21:59.760 "trtype": "TCP", 00:21:59.760 "adrfam": "IPv4", 00:21:59.760 
"traddr": "10.0.0.2", 00:21:59.760 "trsvcid": "4420", 00:21:59.760 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.760 "prchk_reftag": false, 00:21:59.760 "prchk_guard": false, 00:21:59.760 "ctrlr_loss_timeout_sec": 0, 00:21:59.760 "reconnect_delay_sec": 0, 00:21:59.760 "fast_io_fail_timeout_sec": 0, 00:21:59.760 "psk": "key0", 00:21:59.760 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.760 "hdgst": false, 00:21:59.760 "ddgst": false 00:21:59.760 } 00:21:59.760 }, 00:21:59.760 { 00:21:59.760 "method": "bdev_nvme_set_hotplug", 00:21:59.760 "params": { 00:21:59.760 "period_us": 100000, 00:21:59.760 "enable": false 00:21:59.760 } 00:21:59.760 }, 00:21:59.760 { 00:21:59.760 "method": "bdev_enable_histogram", 00:21:59.760 "params": { 00:21:59.760 "name": "nvme0n1", 00:21:59.760 "enable": true 00:21:59.760 } 00:21:59.760 }, 00:21:59.760 { 00:21:59.760 "method": "bdev_wait_for_examine" 00:21:59.760 } 00:21:59.760 ] 00:21:59.760 }, 00:21:59.760 { 00:21:59.760 "subsystem": "nbd", 00:21:59.760 "config": [] 00:21:59.760 } 00:21:59.760 ] 00:21:59.760 }' 00:21:59.760 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:59.760 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:59.760 19:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.760 [2024-07-24 19:14:05.288068] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:21:59.760 [2024-07-24 19:14:05.288163] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1690917 ] 00:21:59.760 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.760 [2024-07-24 19:14:05.364208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.019 [2024-07-24 19:14:05.503122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.019 [2024-07-24 19:14:05.699682] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:00.953 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:00.953 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:00.953 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:00.953 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:22:01.211 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.211 19:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:01.470 Running I/O for 1 seconds... 
00:22:02.405 00:22:02.405 Latency(us) 00:22:02.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.405 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:02.405 Verification LBA range: start 0x0 length 0x2000 00:22:02.405 nvme0n1 : 1.03 2418.70 9.45 0.00 0.00 52197.34 8252.68 62914.56 00:22:02.405 =================================================================================================================== 00:22:02.405 Total : 2418.70 9.45 0.00 0.00 52197.34 8252.68 62914.56 00:22:02.405 0 00:22:02.405 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:22:02.405 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:22:02.405 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:02.405 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:22:02.405 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:22:02.405 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:02.405 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:02.405 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:02.405 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:02.405 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:02.405 19:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:02.405 nvmf_trace.0 00:22:02.405 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:22:02.405 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1690917 00:22:02.405 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1690917 ']' 00:22:02.405 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1690917 00:22:02.405 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:02.405 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.405 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1690917 00:22:02.405 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:02.405 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:02.405 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1690917' 00:22:02.405 killing process with pid 1690917 00:22:02.405 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1690917 00:22:02.405 Received shutdown signal, test time was about 1.000000 seconds 00:22:02.405 00:22:02.405 Latency(us) 00:22:02.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.405 
=================================================================================================================== 00:22:02.405 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.405 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1690917 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:02.976 rmmod nvme_tcp 00:22:02.976 rmmod nvme_fabrics 00:22:02.976 rmmod nvme_keyring 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1690763 ']' 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1690763 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1690763 ']' 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1690763 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1690763 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:02.976 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1690763' 00:22:02.977 killing process with pid 1690763 00:22:02.977 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1690763 00:22:02.977 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1690763 00:22:03.237 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:03.237 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:03.237 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:03.237 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:03.237 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:03.237 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.237 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:22:03.237 19:14:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.812 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:05.812 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ysHQ3YSEE0 /tmp/tmp.wAydRVDiOo /tmp/tmp.x35R8oNgyX 00:22:05.812 00:22:05.812 real 1m35.348s 00:22:05.812 user 2m39.437s 00:22:05.812 sys 0m29.128s 00:22:05.812 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:05.812 19:14:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.812 ************************************ 00:22:05.812 END TEST nvmf_tls 00:22:05.812 ************************************ 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:05.812 ************************************ 00:22:05.812 START TEST nvmf_fips 00:22:05.812 ************************************ 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:05.812 * Looking for test storage... 00:22:05.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.812 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 
00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:05.813 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:05.814 Error setting digest 00:22:05.814 00F237A1787F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:05.814 00F237A1787F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:05.814 
19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.814 19:14:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.358 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.359 19:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:08.359 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:08.359 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.359 19:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:08.359 Found net devices under 0000:84:00.0: cvl_0_0 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:08.359 Found net devices under 0000:84:00.1: cvl_0_1 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.359 19:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.359 19:14:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.359 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.359 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.359 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:08.359 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:08.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:22:08.619 00:22:08.619 --- 10.0.0.2 ping statistics --- 00:22:08.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.619 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:08.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:22:08.619 00:22:08.619 --- 10.0.0.1 ping statistics --- 00:22:08.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.619 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1693417 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1693417 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1693417 ']' 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:08.619 19:14:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:08.879 [2024-07-24 19:14:14.318741] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:22:08.879 [2024-07-24 19:14:14.318875] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.879 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.879 [2024-07-24 19:14:14.448669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.137 [2024-07-24 19:14:14.591123] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.137 [2024-07-24 19:14:14.591182] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.137 [2024-07-24 19:14:14.591202] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.137 [2024-07-24 19:14:14.591218] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.137 [2024-07-24 19:14:14.591232] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.137 [2024-07-24 19:14:14.591276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.071 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:10.071 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:10.071 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:10.071 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:10.071 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:10.071 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.071 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:10.071 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:10.071 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:10.071 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:10.071 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:10.071 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:10.071 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:10.071 19:14:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:10.329 [2024-07-24 19:14:15.986750] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.329 [2024-07-24 19:14:16.002709] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:10.329 [2024-07-24 19:14:16.002988] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.588 
[2024-07-24 19:14:16.036226] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:10.588 malloc0 00:22:10.588 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:10.588 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1693699 00:22:10.588 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:10.588 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1693699 /var/tmp/bdevperf.sock 00:22:10.588 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1693699 ']' 00:22:10.588 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:10.588 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.588 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:10.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:10.588 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.588 19:14:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:10.588 [2024-07-24 19:14:16.158859] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:22:10.588 [2024-07-24 19:14:16.158957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693699 ] 00:22:10.588 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.588 [2024-07-24 19:14:16.240150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.846 [2024-07-24 19:14:16.380147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.778 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:11.778 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:11.778 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:12.036 [2024-07-24 19:14:17.705568] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:12.036 [2024-07-24 19:14:17.705720] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:12.294 TLSTESTn1 00:22:12.294 19:14:17 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:12.294 Running I/O for 10 seconds... 
00:22:24.508 00:22:24.508 Latency(us) 00:22:24.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.508 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:24.508 Verification LBA range: start 0x0 length 0x2000 00:22:24.508 TLSTESTn1 : 10.03 2589.14 10.11 0.00 0.00 49332.14 8592.50 51263.72 00:22:24.508 =================================================================================================================== 00:22:24.508 Total : 2589.14 10.11 0.00 0.00 49332.14 8592.50 51263.72 00:22:24.508 0 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:24.508 nvmf_trace.0 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1693699 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1693699 ']' 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1693699 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1693699 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1693699' 00:22:24.508 killing process with pid 1693699 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1693699 00:22:24.508 Received shutdown signal, test time was about 10.000000 seconds 00:22:24.508 00:22:24.508 Latency(us) 00:22:24.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.508 =================================================================================================================== 00:22:24.508 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:24.508 
[2024-07-24 19:14:28.144114] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1693699 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:24.508 rmmod nvme_tcp 00:22:24.508 rmmod nvme_fabrics 00:22:24.508 rmmod nvme_keyring 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1693417 ']' 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1693417 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1693417 ']' 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1693417 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1693417 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:24.508 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1693417' 00:22:24.509 killing process with pid 1693417 00:22:24.509 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1693417 00:22:24.509 [2024-07-24 19:14:28.570232] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:24.509 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1693417 00:22:24.509 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:24.509 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:24.509 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:24.509 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:24.509 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:24.509 19:14:28 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.509 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.509 19:14:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.444 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:25.444 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:25.444 00:22:25.444 real 0m19.939s 00:22:25.444 user 0m26.250s 00:22:25.444 sys 0m6.717s 00:22:25.444 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:25.444 19:14:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:25.444 ************************************ 00:22:25.444 END TEST nvmf_fips 00:22:25.444 ************************************ 00:22:25.444 19:14:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:22:25.444 19:14:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:22:25.444 19:14:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:22:25.444 19:14:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:22:25.444 19:14:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:22:25.444 19:14:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.728 
19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:28.728 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:28.728 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.728 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:28.729 Found net devices under 0000:84:00.0: cvl_0_0 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:28.729 Found net devices under 0000:84:00.1: cvl_0_1 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:28.729 ************************************ 00:22:28.729 START TEST nvmf_perf_adq 00:22:28.729 ************************************ 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:28.729 * Looking for test storage... 
00:22:28.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.729 19:14:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:28.729 19:14:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:31.290 19:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:31.290 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:31.290 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:31.290 Found net devices under 0000:84:00.0: cvl_0_0 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:31.290 Found net devices under 0000:84:00.1: cvl_0_1 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:31.290 19:14:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:31.861 19:14:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:33.767 19:14:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:39.038 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:39.039 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:39.039 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:39.039 Found net devices under 0000:84:00.0: cvl_0_0 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:39.039 19:14:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:39.039 Found net devices under 0000:84:00.1: cvl_0_1 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
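nvmftestinit above carves the two E810 ports into a back-to-back NVMe/TCP link: cvl_0_0 is flushed and moved into a fresh network namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the records above; the iptables rule and the two pings that follow simply open TCP port 4420 on the initiator interface and verify reachability in both directions:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up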
00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.039 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:39.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:22:39.039 00:22:39.040 --- 10.0.0.2 ping statistics --- 00:22:39.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.040 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:39.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:22:39.040 00:22:39.040 --- 10.0.0.1 ping statistics --- 00:22:39.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.040 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1699734 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1699734 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1699734 ']' 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:39.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:39.040 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.040 [2024-07-24 19:14:44.577267] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:22:39.040 [2024-07-24 19:14:44.577380] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.040 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.040 [2024-07-24 19:14:44.688864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.298 [2024-07-24 19:14:44.893740] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.298 [2024-07-24 19:14:44.893850] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.298 [2024-07-24 19:14:44.893886] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.298 [2024-07-24 19:14:44.893915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.298 [2024-07-24 19:14:44.893941] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:39.298 [2024-07-24 19:14:44.894079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.298 [2024-07-24 19:14:44.894142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.298 [2024-07-24 19:14:44.894202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.298 [2024-07-24 19:14:44.894206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.298 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:39.298 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:39.298 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.298 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:39.298 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.557 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:39.557 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:39.557 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:39.557 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.557 19:14:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
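Because nvmf_tgt was launched with --wait-for-rpc, adq_configure_nvmf_target can adjust the posix socket implementation before the framework initializes; rpc_cmd in these records is a thin wrapper over scripts/rpc.py against /var/tmp/spdk.sock. A condensed sketch of the RPC sequence the surrounding records drive; $rpc is shorthand, and placement-id 0 appears to correspond to this first, non-ADQ baseline pass of perf_adq.sh:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # socket options must land before framework_start_init to take effect
  $rpc sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  # one 64 MB ram-backed namespace exported behind a TCP listener
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After the spdk_nvme_perf run below, the harness checks the resulting spread by counting poll groups with an active I/O qpair in nvmf_get_stats (the jq pipeline expecting 4, one per core in the 0xF mask).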
00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 [2024-07-24 19:14:45.195568] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 Malloc1 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.557 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.557 [2024-07-24 19:14:45.252295] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.816 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.816 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1699832 00:22:39.816 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:39.816 19:14:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:39.816 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.718 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:41.719 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.719 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.719 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.719 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:41.719 "tick_rate": 2700000000, 00:22:41.719 "poll_groups": [ 00:22:41.719 { 00:22:41.719 "name": "nvmf_tgt_poll_group_000", 00:22:41.719 "admin_qpairs": 1, 00:22:41.719 "io_qpairs": 1, 00:22:41.719 "current_admin_qpairs": 1, 00:22:41.719 "current_io_qpairs": 1, 00:22:41.719 "pending_bdev_io": 0, 00:22:41.719 "completed_nvme_io": 15523, 00:22:41.719 "transports": [ 00:22:41.719 { 00:22:41.719 "trtype": "TCP" 00:22:41.719 } 00:22:41.719 ] 00:22:41.719 }, 00:22:41.719 { 00:22:41.719 "name": "nvmf_tgt_poll_group_001", 00:22:41.719 "admin_qpairs": 0, 00:22:41.719 "io_qpairs": 1, 00:22:41.719 "current_admin_qpairs": 0, 00:22:41.719 "current_io_qpairs": 1, 00:22:41.719 "pending_bdev_io": 0, 00:22:41.719 "completed_nvme_io": 15570, 00:22:41.719 "transports": [ 00:22:41.719 { 00:22:41.719 "trtype": "TCP" 00:22:41.719 } 00:22:41.719 ] 00:22:41.719 }, 00:22:41.719 { 00:22:41.719 "name": "nvmf_tgt_poll_group_002", 00:22:41.719 "admin_qpairs": 0, 00:22:41.719 "io_qpairs": 1, 00:22:41.719 "current_admin_qpairs": 0, 00:22:41.719 "current_io_qpairs": 1, 00:22:41.719 "pending_bdev_io": 0, 00:22:41.719 "completed_nvme_io": 15809, 00:22:41.719 "transports": [ 00:22:41.719 { 00:22:41.719 "trtype": "TCP" 00:22:41.719 } 00:22:41.719 ] 00:22:41.719 }, 00:22:41.719 { 00:22:41.719 "name": "nvmf_tgt_poll_group_003", 00:22:41.719 "admin_qpairs": 0, 00:22:41.719 "io_qpairs": 1, 00:22:41.719 "current_admin_qpairs": 0, 00:22:41.719 "current_io_qpairs": 1, 00:22:41.719 "pending_bdev_io": 0, 00:22:41.719 "completed_nvme_io": 15378, 00:22:41.719 "transports": [ 00:22:41.719 { 00:22:41.719 "trtype": "TCP" 00:22:41.719 } 00:22:41.719 ] 00:22:41.719 } 00:22:41.719 ] 00:22:41.719 }' 00:22:41.719 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:41.719 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:41.719 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:41.719 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:41.719 19:14:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 1699832 00:22:49.839 Initializing NVMe Controllers 00:22:49.839 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:49.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:49.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:49.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:49.839 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:49.839 Initialization complete. Launching workers. 00:22:49.839 ======================================================== 00:22:49.839 Latency(us) 00:22:49.839 Device Information : IOPS MiB/s Average min max 00:22:49.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 8267.60 32.30 7742.99 4087.11 10670.10 00:22:49.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8213.30 32.08 7793.64 4190.93 10046.44 00:22:49.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8169.80 31.91 7833.94 4145.39 11005.33 00:22:49.839 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8090.00 31.60 7911.68 4139.33 10090.13 00:22:49.839 ======================================================== 00:22:49.839 Total : 32740.69 127.89 7820.07 4087.11 11005.33 00:22:49.839 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:49.839 rmmod nvme_tcp 00:22:49.839 rmmod nvme_fabrics 00:22:49.839 rmmod nvme_keyring 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1699734 ']' 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1699734 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1699734 ']' 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1699734 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1699734 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:49.839 19:14:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1699734' 00:22:49.839 killing process with pid 1699734 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1699734 00:22:49.839 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1699734 00:22:50.407 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:50.407 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.407 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.407 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.407 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.407 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.407 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.407 19:14:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.312 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:52.312 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:52.312 19:14:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:53.248 19:14:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:55.165 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:00.447 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:00.447 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.447 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:00.448 Found net devices under 0000:84:00.0: cvl_0_0 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:00.448 Found net devices under 0000:84:00.1: cvl_0_1 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.448 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:23:00.448 00:23:00.448 --- 10.0.0.2 ping statistics --- 00:23:00.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.448 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:23:00.448 00:23:00.448 --- 10.0.0.1 ping statistics --- 00:23:00.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.448 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:00.448 net.core.busy_poll = 1 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:00.448 net.core.busy_read = 1 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:00.448 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:00.708 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:00.708 
19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:00.708 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:00.708 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:00.708 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.708 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:00.708 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.708 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1702491 00:23:00.708 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1702491 00:23:00.708 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:00.708 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1702491 ']' 00:23:00.708 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.708 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:00.708 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.708 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:00.708 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.708 [2024-07-24 19:15:06.340049] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:23:00.708 [2024-07-24 19:15:06.340148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.966 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.966 [2024-07-24 19:15:06.469155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:01.224 [2024-07-24 19:15:06.665391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.224 [2024-07-24 19:15:06.665511] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.224 [2024-07-24 19:15:06.665559] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.224 [2024-07-24 19:15:06.665590] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.224 [2024-07-24 19:15:06.665615] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
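The adq_configure_driver sequence traced above is the whole host side of the ADQ setup. As a condensed sketch (the device cvl_0_0, namespace cvl_0_0_ns_spdk, and listener 10.0.0.2:4420 are this rig's values taken from the trace; tool paths shortened for readability):

  # Enable hardware TC offload on the ice port inside the target namespace
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  # Busy-poll sockets on their receive queues instead of sleeping on interrupts
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # Two traffic classes in channel mode: TC0 = queues 0-1 (default traffic),
  # TC1 = queues 2-3 (priority 1, reserved for NVMe/TCP)
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  # Steer inbound NVMe/TCP (dst 10.0.0.2:4420) into TC1 in hardware (skip_sw)
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  # Align XPS so each queue transmits from the matching CPU set
  ip netns exec cvl_0_0_ns_spdk ./scripts/perf/nvmf/set_xps_rxqs cvl_0_0

The matching target-side half shows up in the RPCs below: sock_impl_set_options --enable-placement-id 1 steers each accepted connection to the poll group that matches its hardware queue, and nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 marks the target's sockets with priority 1, which the mqprio map above places in TC1.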
00:23:01.224 [2024-07-24 19:15:06.665733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.224 [2024-07-24 19:15:06.665795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.224 [2024-07-24 19:15:06.665851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.224 [2024-07-24 19:15:06.665855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.162 [2024-07-24 19:15:07.808217] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.162 Malloc1 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.162 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.421 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.421 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.421 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.421 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.421 [2024-07-24 19:15:07.864525] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.421 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.421 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1702782 00:23:02.421 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:02.421 19:15:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:02.421 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.329 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:04.329 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.329 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:04.329 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.329 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:04.329 "tick_rate": 2700000000, 00:23:04.329 "poll_groups": [ 00:23:04.329 { 00:23:04.329 "name": "nvmf_tgt_poll_group_000", 00:23:04.329 "admin_qpairs": 1, 00:23:04.329 "io_qpairs": 2, 00:23:04.329 "current_admin_qpairs": 1, 00:23:04.330 
"current_io_qpairs": 2, 00:23:04.330 "pending_bdev_io": 0, 00:23:04.330 "completed_nvme_io": 20121, 00:23:04.330 "transports": [ 00:23:04.330 { 00:23:04.330 "trtype": "TCP" 00:23:04.330 } 00:23:04.330 ] 00:23:04.330 }, 00:23:04.330 { 00:23:04.330 "name": "nvmf_tgt_poll_group_001", 00:23:04.330 "admin_qpairs": 0, 00:23:04.330 "io_qpairs": 2, 00:23:04.330 "current_admin_qpairs": 0, 00:23:04.330 "current_io_qpairs": 2, 00:23:04.330 "pending_bdev_io": 0, 00:23:04.330 "completed_nvme_io": 20369, 00:23:04.330 "transports": [ 00:23:04.330 { 00:23:04.330 "trtype": "TCP" 00:23:04.330 } 00:23:04.330 ] 00:23:04.330 }, 00:23:04.330 { 00:23:04.330 "name": "nvmf_tgt_poll_group_002", 00:23:04.330 "admin_qpairs": 0, 00:23:04.330 "io_qpairs": 0, 00:23:04.330 "current_admin_qpairs": 0, 00:23:04.330 "current_io_qpairs": 0, 00:23:04.330 "pending_bdev_io": 0, 00:23:04.330 "completed_nvme_io": 0, 00:23:04.330 "transports": [ 00:23:04.330 { 00:23:04.330 "trtype": "TCP" 00:23:04.330 } 00:23:04.330 ] 00:23:04.330 }, 00:23:04.330 { 00:23:04.330 "name": "nvmf_tgt_poll_group_003", 00:23:04.330 "admin_qpairs": 0, 00:23:04.330 "io_qpairs": 0, 00:23:04.330 "current_admin_qpairs": 0, 00:23:04.330 "current_io_qpairs": 0, 00:23:04.330 "pending_bdev_io": 0, 00:23:04.330 "completed_nvme_io": 0, 00:23:04.330 "transports": [ 00:23:04.330 { 00:23:04.330 "trtype": "TCP" 00:23:04.330 } 00:23:04.330 ] 00:23:04.330 } 00:23:04.330 ] 00:23:04.330 }' 00:23:04.330 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:04.330 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:04.330 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:04.330 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:04.330 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1702782 00:23:12.457 Initializing NVMe Controllers 00:23:12.457 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:12.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:12.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:12.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:12.457 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:12.457 Initialization complete. Launching workers. 
00:23:12.457 ======================================================== 00:23:12.457 Latency(us) 00:23:12.457 Device Information : IOPS MiB/s Average min max 00:23:12.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5300.40 20.70 12075.37 2250.09 58103.45 00:23:12.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5489.60 21.44 11660.67 2208.25 57401.28 00:23:12.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5399.10 21.09 11853.66 2244.92 58617.99 00:23:12.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5126.80 20.03 12488.10 2485.02 58254.31 00:23:12.457 ======================================================== 00:23:12.457 Total : 21315.89 83.27 12011.68 2208.25 58617.99 00:23:12.457 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:12.457 rmmod nvme_tcp 00:23:12.457 rmmod nvme_fabrics 00:23:12.457 rmmod nvme_keyring 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1702491 ']' 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1702491 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1702491 ']' 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1702491 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1702491 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1702491' 00:23:12.457 killing process with pid 1702491 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1702491 00:23:12.457 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1702491 00:23:13.025 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:13.025 
19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:13.025 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:13.025 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:13.025 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:13.025 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.025 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.025 19:15:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:16.316 00:23:16.316 real 0m47.771s 00:23:16.316 user 2m45.654s 00:23:16.316 sys 0m10.534s 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:16.316 ************************************ 00:23:16.316 END TEST nvmf_perf_adq 00:23:16.316 ************************************ 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:16.316 ************************************ 00:23:16.316 START TEST nvmf_shutdown 00:23:16.316 ************************************ 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:16.316 * Looking for test storage... 
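Before the tc1 body runs, nvmftestinit rebuilds the split-namespace topology, traced in full below. A minimal sketch of that plumbing, using the names this run assigns (cvl_0_0 = target port, moved into namespace cvl_0_0_ns_spdk; cvl_0_1 = initiator port, left in the root namespace):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # Target port moves into its own namespace; initiator port stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Let NVMe/TCP (port 4420) in through the initiator interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Verify reachability in both directions before the test proper starts
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1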
00:23:16.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.316 19:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:16.316 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:16.317 19:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:16.317 ************************************ 00:23:16.317 START TEST nvmf_shutdown_tc1 00:23:16.317 ************************************ 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:16.317 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:18.853 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.853 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:18.853 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:18.853 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:18.854 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:18.854 19:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:18.854 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:18.854 Found net devices under 0000:84:00.0: cvl_0_0 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:18.854 Found net devices under 0000:84:00.1: cvl_0_1 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.854 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:19.117 19:15:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:19.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:19.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms
00:23:19.117
00:23:19.117 --- 10.0.0.2 ping statistics ---
00:23:19.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:19.117 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:19.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:19.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms
00:23:19.117
00:23:19.117 --- 10.0.0.1 ping statistics ---
00:23:19.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:19.117 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10
-- # set +x 00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1706582 00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1706582 00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1706582 ']' 00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.117 19:15:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.117 [2024-07-24 19:15:24.768718] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:23:19.117 [2024-07-24 19:15:24.768809] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.117 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.395 [2024-07-24 19:15:24.856187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:19.395 [2024-07-24 19:15:25.000390] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.395 [2024-07-24 19:15:25.000485] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.395 [2024-07-24 19:15:25.000508] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.395 [2024-07-24 19:15:25.000525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.395 [2024-07-24 19:15:25.000539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
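A note on the plumbing traced above: nvmf_tcp_init splits the two ports of the e810 NIC between a fresh network namespace (target side) and the host (initiator side), so one machine can exercise NVMe/TCP end to end, and nvmf_tgt is then launched inside that namespace. A standalone sketch of the equivalent commands, using the interface names and addresses this run happened to select (they vary per machine):

ip netns add cvl_0_0_ns_spdk                        # namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                  # host to target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target to host reachability

Every later target-side command, including the nvmf_tgt launch above, is prefixed with 'ip netns exec cvl_0_0_ns_spdk' via NVMF_TARGET_NS_CMD.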
00:23:19.395 [2024-07-24 19:15:25.000618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.395 [2024-07-24 19:15:25.000679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:19.395 [2024-07-24 19:15:25.000739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:19.395 [2024-07-24 19:15:25.000744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.664 [2024-07-24 19:15:25.189804] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.664 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.664 Malloc1 00:23:19.664 [2024-07-24 19:15:25.278978] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.664 Malloc2 00:23:19.922 Malloc3 00:23:19.922 Malloc4 00:23:19.922 Malloc5 00:23:19.922 Malloc6 00:23:19.922 Malloc7 00:23:20.181 Malloc8 00:23:20.181 Malloc9 00:23:20.181 Malloc10 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1706765 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1706765 /var/tmp/bdevperf.sock 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1706765 ']' 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.181 { 00:23:20.181 "params": { 00:23:20.181 "name": "Nvme$subsystem", 00:23:20.181 "trtype": "$TEST_TRANSPORT", 00:23:20.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.181 "adrfam": "ipv4", 00:23:20.181 "trsvcid": "$NVMF_PORT", 00:23:20.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.181 "hdgst": ${hdgst:-false}, 00:23:20.181 "ddgst": ${ddgst:-false} 00:23:20.181 }, 00:23:20.181 "method": "bdev_nvme_attach_controller" 00:23:20.181 } 00:23:20.181 EOF 00:23:20.181 )") 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
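The heredoc loop being traced here is gen_nvmf_target_json from nvmf/common.sh: it stamps out one bdev_nvme_attach_controller entry per subsystem number and hands the result to bdev_svc as its --json config. A condensed sketch of the visible pattern (the wrapper object around the entries is sketched separately after the jq step further down):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one attach-controller entry per requested subsystem number
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # the fragments are then comma-joined and validated with jq (see below)
}

With TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2 and NVMF_PORT=4420, the ten subsystem numbers passed in expand to the ten-controller config printed later in the trace.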
00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.181 { 00:23:20.181 "params": { 00:23:20.181 "name": "Nvme$subsystem", 00:23:20.181 "trtype": "$TEST_TRANSPORT", 00:23:20.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.181 "adrfam": "ipv4", 00:23:20.181 "trsvcid": "$NVMF_PORT", 00:23:20.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.181 "hdgst": ${hdgst:-false}, 00:23:20.181 "ddgst": ${ddgst:-false} 00:23:20.181 }, 00:23:20.181 "method": "bdev_nvme_attach_controller" 00:23:20.181 } 00:23:20.181 EOF 00:23:20.181 )") 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.181 { 00:23:20.181 "params": { 00:23:20.181 "name": "Nvme$subsystem", 00:23:20.181 "trtype": "$TEST_TRANSPORT", 00:23:20.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.181 "adrfam": "ipv4", 00:23:20.181 "trsvcid": "$NVMF_PORT", 00:23:20.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.181 "hdgst": ${hdgst:-false}, 00:23:20.181 "ddgst": ${ddgst:-false} 00:23:20.181 }, 00:23:20.181 "method": "bdev_nvme_attach_controller" 00:23:20.181 } 00:23:20.181 EOF 00:23:20.181 )") 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.181 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.181 { 00:23:20.181 "params": { 00:23:20.182 "name": "Nvme$subsystem", 00:23:20.182 "trtype": "$TEST_TRANSPORT", 00:23:20.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.182 "adrfam": "ipv4", 00:23:20.182 "trsvcid": "$NVMF_PORT", 00:23:20.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.182 "hdgst": ${hdgst:-false}, 00:23:20.182 "ddgst": ${ddgst:-false} 00:23:20.182 }, 00:23:20.182 "method": "bdev_nvme_attach_controller" 00:23:20.182 } 00:23:20.182 EOF 00:23:20.182 )") 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.182 { 00:23:20.182 "params": { 00:23:20.182 "name": "Nvme$subsystem", 00:23:20.182 "trtype": 
"$TEST_TRANSPORT", 00:23:20.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.182 "adrfam": "ipv4", 00:23:20.182 "trsvcid": "$NVMF_PORT", 00:23:20.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.182 "hdgst": ${hdgst:-false}, 00:23:20.182 "ddgst": ${ddgst:-false} 00:23:20.182 }, 00:23:20.182 "method": "bdev_nvme_attach_controller" 00:23:20.182 } 00:23:20.182 EOF 00:23:20.182 )") 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.182 { 00:23:20.182 "params": { 00:23:20.182 "name": "Nvme$subsystem", 00:23:20.182 "trtype": "$TEST_TRANSPORT", 00:23:20.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.182 "adrfam": "ipv4", 00:23:20.182 "trsvcid": "$NVMF_PORT", 00:23:20.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.182 "hdgst": ${hdgst:-false}, 00:23:20.182 "ddgst": ${ddgst:-false} 00:23:20.182 }, 00:23:20.182 "method": "bdev_nvme_attach_controller" 00:23:20.182 } 00:23:20.182 EOF 00:23:20.182 )") 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.182 { 00:23:20.182 "params": { 00:23:20.182 "name": "Nvme$subsystem", 00:23:20.182 "trtype": "$TEST_TRANSPORT", 00:23:20.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.182 "adrfam": "ipv4", 00:23:20.182 "trsvcid": "$NVMF_PORT", 00:23:20.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.182 "hdgst": ${hdgst:-false}, 00:23:20.182 "ddgst": ${ddgst:-false} 00:23:20.182 }, 00:23:20.182 "method": "bdev_nvme_attach_controller" 00:23:20.182 } 00:23:20.182 EOF 00:23:20.182 )") 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.182 { 00:23:20.182 "params": { 00:23:20.182 "name": "Nvme$subsystem", 00:23:20.182 "trtype": "$TEST_TRANSPORT", 00:23:20.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.182 "adrfam": "ipv4", 00:23:20.182 "trsvcid": "$NVMF_PORT", 00:23:20.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.182 "hdgst": ${hdgst:-false}, 00:23:20.182 "ddgst": ${ddgst:-false} 00:23:20.182 }, 00:23:20.182 "method": "bdev_nvme_attach_controller" 00:23:20.182 } 00:23:20.182 EOF 00:23:20.182 )") 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.182 19:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.182 { 00:23:20.182 "params": { 00:23:20.182 "name": "Nvme$subsystem", 00:23:20.182 "trtype": "$TEST_TRANSPORT", 00:23:20.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.182 "adrfam": "ipv4", 00:23:20.182 "trsvcid": "$NVMF_PORT", 00:23:20.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.182 "hdgst": ${hdgst:-false}, 00:23:20.182 "ddgst": ${ddgst:-false} 00:23:20.182 }, 00:23:20.182 "method": "bdev_nvme_attach_controller" 00:23:20.182 } 00:23:20.182 EOF 00:23:20.182 )") 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:20.182 { 00:23:20.182 "params": { 00:23:20.182 "name": "Nvme$subsystem", 00:23:20.182 "trtype": "$TEST_TRANSPORT", 00:23:20.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.182 "adrfam": "ipv4", 00:23:20.182 "trsvcid": "$NVMF_PORT", 00:23:20.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.182 "hdgst": ${hdgst:-false}, 00:23:20.182 "ddgst": ${ddgst:-false} 00:23:20.182 }, 00:23:20.182 "method": "bdev_nvme_attach_controller" 00:23:20.182 } 00:23:20.182 EOF 00:23:20.182 )") 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
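The jq . just traced, together with the IFS=, and printf '%s\n' steps that follow, is the tail of the same helper: the per-controller fragments are comma-joined into one document and pretty-printed. Roughly as follows; the outer "config" wrapper key shown here is an assumption, since only the join itself is visible in the trace:

# comma-join the fragments; "${config[*]}" uses the first character of IFS
# as the separator, which is where the },{ boundaries in the output originate
jq . <<JSON
{
  "config": [
    $(IFS=","; printf '%s\n' "${config[*]}")
  ]
}
JSON

The expanded result, printed next, is what bdev_svc reads from /dev/fd/63.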
00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:20.182 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:20.182 "params": { 00:23:20.182 "name": "Nvme1", 00:23:20.182 "trtype": "tcp", 00:23:20.182 "traddr": "10.0.0.2", 00:23:20.182 "adrfam": "ipv4", 00:23:20.182 "trsvcid": "4420", 00:23:20.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.182 "hdgst": false, 00:23:20.182 "ddgst": false 00:23:20.182 }, 00:23:20.182 "method": "bdev_nvme_attach_controller" 00:23:20.182 },{ 00:23:20.182 "params": { 00:23:20.182 "name": "Nvme2", 00:23:20.182 "trtype": "tcp", 00:23:20.182 "traddr": "10.0.0.2", 00:23:20.182 "adrfam": "ipv4", 00:23:20.182 "trsvcid": "4420", 00:23:20.182 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:20.182 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:20.182 "hdgst": false, 00:23:20.182 "ddgst": false 00:23:20.182 }, 00:23:20.182 "method": "bdev_nvme_attach_controller" 00:23:20.182 },{ 00:23:20.182 "params": { 00:23:20.182 "name": "Nvme3", 00:23:20.182 "trtype": "tcp", 00:23:20.182 "traddr": "10.0.0.2", 00:23:20.182 "adrfam": "ipv4", 00:23:20.182 "trsvcid": "4420", 00:23:20.182 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:20.182 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:20.182 "hdgst": false, 00:23:20.182 "ddgst": false 00:23:20.182 }, 00:23:20.182 "method": "bdev_nvme_attach_controller" 00:23:20.182 },{ 00:23:20.182 "params": { 00:23:20.182 "name": "Nvme4", 00:23:20.182 "trtype": "tcp", 00:23:20.182 "traddr": "10.0.0.2", 00:23:20.182 "adrfam": "ipv4", 00:23:20.182 "trsvcid": "4420", 00:23:20.182 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:20.182 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:20.182 "hdgst": false, 00:23:20.182 "ddgst": false 00:23:20.182 }, 00:23:20.182 "method": "bdev_nvme_attach_controller" 00:23:20.182 },{ 00:23:20.182 "params": { 00:23:20.182 "name": "Nvme5", 00:23:20.182 "trtype": "tcp", 00:23:20.182 "traddr": "10.0.0.2", 00:23:20.182 "adrfam": "ipv4", 00:23:20.182 "trsvcid": "4420", 00:23:20.182 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:20.182 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:20.182 "hdgst": false, 00:23:20.182 "ddgst": false 00:23:20.182 }, 00:23:20.182 "method": "bdev_nvme_attach_controller" 00:23:20.182 },{ 00:23:20.182 "params": { 00:23:20.182 "name": "Nvme6", 00:23:20.182 "trtype": "tcp", 00:23:20.182 "traddr": "10.0.0.2", 00:23:20.182 "adrfam": "ipv4", 00:23:20.182 "trsvcid": "4420", 00:23:20.182 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:20.182 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:20.182 "hdgst": false, 00:23:20.182 "ddgst": false 00:23:20.182 }, 00:23:20.182 "method": "bdev_nvme_attach_controller" 00:23:20.182 },{ 00:23:20.182 "params": { 00:23:20.182 "name": "Nvme7", 00:23:20.182 "trtype": "tcp", 00:23:20.182 "traddr": "10.0.0.2", 00:23:20.182 "adrfam": "ipv4", 00:23:20.182 "trsvcid": "4420", 00:23:20.182 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:20.182 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:20.182 "hdgst": false, 00:23:20.182 "ddgst": false 00:23:20.182 }, 00:23:20.182 "method": "bdev_nvme_attach_controller" 00:23:20.183 },{ 00:23:20.183 "params": { 00:23:20.183 "name": "Nvme8", 00:23:20.183 "trtype": "tcp", 00:23:20.183 "traddr": "10.0.0.2", 00:23:20.183 "adrfam": "ipv4", 00:23:20.183 "trsvcid": "4420", 00:23:20.183 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:20.183 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:20.183 "hdgst": false, 00:23:20.183 "ddgst": false 00:23:20.183 }, 00:23:20.183 "method": "bdev_nvme_attach_controller" 00:23:20.183 },{ 00:23:20.183 "params": { 00:23:20.183 "name": "Nvme9", 00:23:20.183 "trtype": "tcp", 00:23:20.183 "traddr": "10.0.0.2", 00:23:20.183 "adrfam": "ipv4", 00:23:20.183 "trsvcid": "4420", 00:23:20.183 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:20.183 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:20.183 "hdgst": false, 00:23:20.183 "ddgst": false 00:23:20.183 }, 00:23:20.183 "method": "bdev_nvme_attach_controller" 00:23:20.183 },{ 00:23:20.183 "params": { 00:23:20.183 "name": "Nvme10", 00:23:20.183 "trtype": "tcp", 00:23:20.183 "traddr": "10.0.0.2", 00:23:20.183 "adrfam": "ipv4", 00:23:20.183 "trsvcid": "4420", 00:23:20.183 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:20.183 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:20.183 "hdgst": false, 00:23:20.183 "ddgst": false 00:23:20.183 }, 00:23:20.183 "method": "bdev_nvme_attach_controller" 00:23:20.183 }' 00:23:20.183 [2024-07-24 19:15:25.828585] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:23:20.183 [2024-07-24 19:15:25.828673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:20.183 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.441 [2024-07-24 19:15:25.908051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.441 [2024-07-24 19:15:26.047141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.340 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:22.340 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:22.340 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:22.340 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.340 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:22.340 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.340 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1706765 00:23:22.340 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:22.340 19:15:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:23.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1706765 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1706582 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:23.276 19:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.276 { 00:23:23.276 "params": { 00:23:23.276 "name": "Nvme$subsystem", 00:23:23.276 "trtype": "$TEST_TRANSPORT", 00:23:23.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.276 "adrfam": "ipv4", 00:23:23.276 "trsvcid": "$NVMF_PORT", 00:23:23.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.276 "hdgst": ${hdgst:-false}, 00:23:23.276 "ddgst": ${ddgst:-false} 00:23:23.276 }, 00:23:23.276 "method": "bdev_nvme_attach_controller" 00:23:23.276 } 00:23:23.276 EOF 00:23:23.276 )") 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.276 { 00:23:23.276 "params": { 00:23:23.276 "name": "Nvme$subsystem", 00:23:23.276 "trtype": "$TEST_TRANSPORT", 00:23:23.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.276 "adrfam": "ipv4", 00:23:23.276 "trsvcid": "$NVMF_PORT", 00:23:23.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.276 "hdgst": ${hdgst:-false}, 00:23:23.276 "ddgst": ${ddgst:-false} 00:23:23.276 }, 00:23:23.276 "method": "bdev_nvme_attach_controller" 00:23:23.276 } 00:23:23.276 EOF 00:23:23.276 )") 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.276 { 00:23:23.276 "params": { 00:23:23.276 "name": "Nvme$subsystem", 00:23:23.276 "trtype": "$TEST_TRANSPORT", 00:23:23.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.276 "adrfam": "ipv4", 00:23:23.276 "trsvcid": "$NVMF_PORT", 00:23:23.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.276 "hdgst": ${hdgst:-false}, 00:23:23.276 "ddgst": ${ddgst:-false} 00:23:23.276 }, 00:23:23.276 "method": "bdev_nvme_attach_controller" 00:23:23.276 } 00:23:23.276 EOF 00:23:23.276 )") 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.276 { 00:23:23.276 "params": { 00:23:23.276 "name": "Nvme$subsystem", 00:23:23.276 "trtype": 
"$TEST_TRANSPORT", 00:23:23.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.276 "adrfam": "ipv4", 00:23:23.276 "trsvcid": "$NVMF_PORT", 00:23:23.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.276 "hdgst": ${hdgst:-false}, 00:23:23.276 "ddgst": ${ddgst:-false} 00:23:23.276 }, 00:23:23.276 "method": "bdev_nvme_attach_controller" 00:23:23.276 } 00:23:23.276 EOF 00:23:23.276 )") 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.276 { 00:23:23.276 "params": { 00:23:23.276 "name": "Nvme$subsystem", 00:23:23.276 "trtype": "$TEST_TRANSPORT", 00:23:23.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.276 "adrfam": "ipv4", 00:23:23.276 "trsvcid": "$NVMF_PORT", 00:23:23.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.276 "hdgst": ${hdgst:-false}, 00:23:23.276 "ddgst": ${ddgst:-false} 00:23:23.276 }, 00:23:23.276 "method": "bdev_nvme_attach_controller" 00:23:23.276 } 00:23:23.276 EOF 00:23:23.276 )") 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.276 { 00:23:23.276 "params": { 00:23:23.276 "name": "Nvme$subsystem", 00:23:23.276 "trtype": "$TEST_TRANSPORT", 00:23:23.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.276 "adrfam": "ipv4", 00:23:23.276 "trsvcid": "$NVMF_PORT", 00:23:23.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.276 "hdgst": ${hdgst:-false}, 00:23:23.276 "ddgst": ${ddgst:-false} 00:23:23.276 }, 00:23:23.276 "method": "bdev_nvme_attach_controller" 00:23:23.276 } 00:23:23.276 EOF 00:23:23.276 )") 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.276 { 00:23:23.276 "params": { 00:23:23.276 "name": "Nvme$subsystem", 00:23:23.276 "trtype": "$TEST_TRANSPORT", 00:23:23.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.276 "adrfam": "ipv4", 00:23:23.276 "trsvcid": "$NVMF_PORT", 00:23:23.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.276 "hdgst": ${hdgst:-false}, 00:23:23.276 "ddgst": ${ddgst:-false} 00:23:23.276 }, 00:23:23.276 "method": "bdev_nvme_attach_controller" 00:23:23.276 } 00:23:23.276 EOF 00:23:23.276 )") 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.276 19:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.276 { 00:23:23.276 "params": { 00:23:23.276 "name": "Nvme$subsystem", 00:23:23.276 "trtype": "$TEST_TRANSPORT", 00:23:23.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.276 "adrfam": "ipv4", 00:23:23.276 "trsvcid": "$NVMF_PORT", 00:23:23.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.276 "hdgst": ${hdgst:-false}, 00:23:23.276 "ddgst": ${ddgst:-false} 00:23:23.276 }, 00:23:23.276 "method": "bdev_nvme_attach_controller" 00:23:23.276 } 00:23:23.276 EOF 00:23:23.276 )") 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.276 { 00:23:23.276 "params": { 00:23:23.276 "name": "Nvme$subsystem", 00:23:23.276 "trtype": "$TEST_TRANSPORT", 00:23:23.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.276 "adrfam": "ipv4", 00:23:23.276 "trsvcid": "$NVMF_PORT", 00:23:23.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.276 "hdgst": ${hdgst:-false}, 00:23:23.276 "ddgst": ${ddgst:-false} 00:23:23.276 }, 00:23:23.276 "method": "bdev_nvme_attach_controller" 00:23:23.276 } 00:23:23.276 EOF 00:23:23.276 )") 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:23.276 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:23.276 { 00:23:23.276 "params": { 00:23:23.276 "name": "Nvme$subsystem", 00:23:23.277 "trtype": "$TEST_TRANSPORT", 00:23:23.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:23.277 "adrfam": "ipv4", 00:23:23.277 "trsvcid": "$NVMF_PORT", 00:23:23.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:23.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:23.277 "hdgst": ${hdgst:-false}, 00:23:23.277 "ddgst": ${ddgst:-false} 00:23:23.277 }, 00:23:23.277 "method": "bdev_nvme_attach_controller" 00:23:23.277 } 00:23:23.277 EOF 00:23:23.277 )") 00:23:23.277 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:23.277 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
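The same expansion is traced a second time here because the config now feeds bdevperf rather than bdev_svc; as the earlier 'Killed' message showed for bdev_svc, the JSON travels through process substitution, which is why the command line reads --json /dev/fd/62. A standalone equivalent of the run measured below:

# -q 64: queue depth, -o 65536: 64 KiB I/O size, -w verify: write out and
# read back with verification, -t 1: run for one second (hence the
# "Running I/O for 1 seconds..." line in the results)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1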
00:23:23.277 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:23.277 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:23.277 "params": { 00:23:23.277 "name": "Nvme1", 00:23:23.277 "trtype": "tcp", 00:23:23.277 "traddr": "10.0.0.2", 00:23:23.277 "adrfam": "ipv4", 00:23:23.277 "trsvcid": "4420", 00:23:23.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:23.277 "hdgst": false, 00:23:23.277 "ddgst": false 00:23:23.277 }, 00:23:23.277 "method": "bdev_nvme_attach_controller" 00:23:23.277 },{ 00:23:23.277 "params": { 00:23:23.277 "name": "Nvme2", 00:23:23.277 "trtype": "tcp", 00:23:23.277 "traddr": "10.0.0.2", 00:23:23.277 "adrfam": "ipv4", 00:23:23.277 "trsvcid": "4420", 00:23:23.277 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:23.277 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:23.277 "hdgst": false, 00:23:23.277 "ddgst": false 00:23:23.277 }, 00:23:23.277 "method": "bdev_nvme_attach_controller" 00:23:23.277 },{ 00:23:23.277 "params": { 00:23:23.277 "name": "Nvme3", 00:23:23.277 "trtype": "tcp", 00:23:23.277 "traddr": "10.0.0.2", 00:23:23.277 "adrfam": "ipv4", 00:23:23.277 "trsvcid": "4420", 00:23:23.277 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:23.277 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:23.277 "hdgst": false, 00:23:23.277 "ddgst": false 00:23:23.277 }, 00:23:23.277 "method": "bdev_nvme_attach_controller" 00:23:23.277 },{ 00:23:23.277 "params": { 00:23:23.277 "name": "Nvme4", 00:23:23.277 "trtype": "tcp", 00:23:23.277 "traddr": "10.0.0.2", 00:23:23.277 "adrfam": "ipv4", 00:23:23.277 "trsvcid": "4420", 00:23:23.277 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:23.277 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:23.277 "hdgst": false, 00:23:23.277 "ddgst": false 00:23:23.277 }, 00:23:23.277 "method": "bdev_nvme_attach_controller" 00:23:23.277 },{ 00:23:23.277 "params": { 00:23:23.277 "name": "Nvme5", 00:23:23.277 "trtype": "tcp", 00:23:23.277 "traddr": "10.0.0.2", 00:23:23.277 "adrfam": "ipv4", 00:23:23.277 "trsvcid": "4420", 00:23:23.277 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:23.277 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:23.277 "hdgst": false, 00:23:23.277 "ddgst": false 00:23:23.277 }, 00:23:23.277 "method": "bdev_nvme_attach_controller" 00:23:23.277 },{ 00:23:23.277 "params": { 00:23:23.277 "name": "Nvme6", 00:23:23.277 "trtype": "tcp", 00:23:23.277 "traddr": "10.0.0.2", 00:23:23.277 "adrfam": "ipv4", 00:23:23.277 "trsvcid": "4420", 00:23:23.277 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:23.277 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:23.277 "hdgst": false, 00:23:23.277 "ddgst": false 00:23:23.277 }, 00:23:23.277 "method": "bdev_nvme_attach_controller" 00:23:23.277 },{ 00:23:23.277 "params": { 00:23:23.277 "name": "Nvme7", 00:23:23.277 "trtype": "tcp", 00:23:23.277 "traddr": "10.0.0.2", 00:23:23.277 "adrfam": "ipv4", 00:23:23.277 "trsvcid": "4420", 00:23:23.277 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:23.277 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:23.277 "hdgst": false, 00:23:23.277 "ddgst": false 00:23:23.277 }, 00:23:23.277 "method": "bdev_nvme_attach_controller" 00:23:23.277 },{ 00:23:23.277 "params": { 00:23:23.277 "name": "Nvme8", 00:23:23.277 "trtype": "tcp", 00:23:23.277 "traddr": "10.0.0.2", 00:23:23.277 "adrfam": "ipv4", 00:23:23.277 "trsvcid": "4420", 00:23:23.277 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:23.277 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:23.277 "hdgst": false,
00:23:23.277 "ddgst": false
00:23:23.277 },
00:23:23.277 "method": "bdev_nvme_attach_controller"
00:23:23.277 },{
00:23:23.277 "params": {
00:23:23.277 "name": "Nvme9",
00:23:23.277 "trtype": "tcp",
00:23:23.277 "traddr": "10.0.0.2",
00:23:23.277 "adrfam": "ipv4",
00:23:23.277 "trsvcid": "4420",
00:23:23.277 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:23:23.277 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:23:23.277 "hdgst": false,
00:23:23.277 "ddgst": false
00:23:23.277 },
00:23:23.277 "method": "bdev_nvme_attach_controller"
00:23:23.277 },{
00:23:23.277 "params": {
00:23:23.277 "name": "Nvme10",
00:23:23.277 "trtype": "tcp",
00:23:23.277 "traddr": "10.0.0.2",
00:23:23.277 "adrfam": "ipv4",
00:23:23.277 "trsvcid": "4420",
00:23:23.277 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:23:23.277 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:23:23.277 "hdgst": false,
00:23:23.277 "ddgst": false
00:23:23.277 },
00:23:23.277 "method": "bdev_nvme_attach_controller"
00:23:23.277 }'
00:23:23.536 [2024-07-24 19:15:28.980168] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization...
00:23:23.536 [2024-07-24 19:15:28.980259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707182 ]
00:23:23.536 EAL: No free 2048 kB hugepages reported on node 1
00:23:23.536 [2024-07-24 19:15:29.055023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:23.536 [2024-07-24 19:15:29.196582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:25.438 Running I/O for 1 seconds...
00:23:26.373
00:23:26.373 Latency(us)
00:23:26.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:26.373 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.373 Verification LBA range: start 0x0 length 0x400
00:23:26.373 Nvme1n1 : 1.15 167.68 10.48 0.00 0.00 376932.12 26020.22 329330.54
00:23:26.373 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.373 Verification LBA range: start 0x0 length 0x400
00:23:26.373 Nvme2n1 : 1.15 166.51 10.41 0.00 0.00 371271.55 51263.72 312242.63
00:23:26.373 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.373 Verification LBA range: start 0x0 length 0x400
00:23:26.373 Nvme3n1 : 1.14 168.81 10.55 0.00 0.00 357618.73 26991.12 344865.00
00:23:26.373 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.373 Verification LBA range: start 0x0 length 0x400
00:23:26.373 Nvme4n1 : 1.12 171.86 10.74 0.00 0.00 341516.14 23690.05 326223.64
00:23:26.373 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.373 Verification LBA range: start 0x0 length 0x400
00:23:26.373 Nvme5n1 : 1.19 161.44 10.09 0.00 0.00 358745.88 26214.40 346418.44
00:23:26.373 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.373 Verification LBA range: start 0x0 length 0x400
00:23:26.373 Nvme6n1 : 1.20 176.45 11.03 0.00 0.00 307750.71 16796.63 313796.08
00:23:26.373 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.373 Verification LBA range: start 0x0 length 0x400
00:23:26.374 Nvme7n1 : 1.24 209.43 13.09 0.00 0.00 263535.94 4466.16 349525.33
00:23:26.374 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.374 Verification LBA range: start 0x0 length 0x400
00:23:26.374 Nvme8n1 : 1.25 204.28 12.77 0.00 0.00 266609.40 19418.07 337097.77
00:23:26.374 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.374 Verification LBA range: start 0x0 length 0x400
00:23:26.374 Nvme9n1 : 1.23 158.90 9.93 0.00 0.00 325171.50 12524.66 361952.90
00:23:26.374 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:26.374 Verification LBA range: start 0x0 length 0x400
00:23:26.374 Nvme10n1 : 1.26 202.76 12.67 0.00 0.00 256958.67 8155.59 383701.14
00:23:26.374 ===================================================================================================================
00:23:26.374 Total : 1788.12 111.76 0.00 0.00 316965.87 4466.16 383701.14
00:23:26.631 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:23:26.632 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:26.632 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:26.632 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:26.632 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:23:26.632 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:26.632 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:23:26.632 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:26.632 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:23:26.632 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:26.632 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:26.632 rmmod nvme_tcp
00:23:26.889 rmmod nvme_fabrics
00:23:26.889 rmmod nvme_keyring
00:23:26.889 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:26.889 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e
00:23:26.889 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0
00:23:26.889 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1706582 ']'
00:23:26.889 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1706582
00:23:26.889 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1706582 ']'
00:23:26.889 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1706582
00:23:26.889 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname
00:23:26.889 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:26.889 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1706582 00:23:26.889 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:26.889 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:26.889 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1706582' 00:23:26.889 killing process with pid 1706582 00:23:26.889 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1706582 00:23:26.889 19:15:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1706582 00:23:27.458 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:27.458 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:27.458 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:27.458 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:27.458 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:27.458 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.458 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.458 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:29.991 00:23:29.991 real 0m13.307s 00:23:29.991 user 0m37.453s 00:23:29.991 sys 0m3.957s 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:29.991 ************************************ 00:23:29.991 END TEST nvmf_shutdown_tc1 00:23:29.991 ************************************ 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:29.991 ************************************ 00:23:29.991 START TEST nvmf_shutdown_tc2 00:23:29.991 ************************************ 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:29.991 19:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:29.991 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:29.991 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:29.991 Found net devices under 0000:84:00.0: cvl_0_0 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:29.991 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:29.991 19:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:29.991 Found net devices under 0000:84:00.1: cvl_0_1 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.992 19:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:29.992 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:23:29.992 00:23:29.992 --- 10.0.0.2 ping statistics --- 00:23:29.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.992 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.992 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:29.992 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:23:29.992 00:23:29.992 --- 10.0.0.1 ping statistics --- 00:23:29.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.992 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1707948 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1707948 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1707948 ']' 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:29.992 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:29.992 [2024-07-24 19:15:35.413823] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:23:29.992 [2024-07-24 19:15:35.413932] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.992 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.992 [2024-07-24 19:15:35.533119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.251 [2024-07-24 19:15:35.716887] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.251 [2024-07-24 19:15:35.716965] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.251 [2024-07-24 19:15:35.717004] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.251 [2024-07-24 19:15:35.717038] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.251 [2024-07-24 19:15:35.717070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
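
The records above show nvmfappstart launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten blocking until the target answers on its RPC socket; because that channel is a UNIX-domain socket, it stays reachable no matter which network namespace the target runs in. A minimal sketch of the launch-and-wait pattern, assuming the same namespace and the default /var/tmp/spdk.sock socket (the retry budget and the rpc.py invocation are illustrative, not taken from this run):

# Start the target inside the namespace and remember its PID.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Poll until the RPC socket exists and the framework reports ready.
for ((i = 0; i < 100; i++)); do
    [[ -S /var/tmp/spdk.sock ]] &&
        ./scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init && break
    sleep 0.5
done
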
00:23:30.251 [2024-07-24 19:15:35.717190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.251 [2024-07-24 19:15:35.717260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.251 [2024-07-24 19:15:35.717326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:30.251 [2024-07-24 19:15:35.717339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.251 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.251 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:30.251 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:30.251 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:30.251 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:30.251 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.251 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:30.251 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.251 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:30.251 [2024-07-24 19:15:35.933635] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.251 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.251 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:30.251 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:30.251 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:30.251 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:30.251 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.510 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:30.510 Malloc1 00:23:30.510 [2024-07-24 19:15:36.021802] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.510 Malloc2 00:23:30.510 Malloc3 00:23:30.510 Malloc4 00:23:30.510 Malloc5 00:23:30.768 Malloc6 00:23:30.768 Malloc7 00:23:30.768 Malloc8 00:23:30.768 Malloc9 00:23:31.027 Malloc10 00:23:31.027 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.027 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:31.027 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:31.027 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.027 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1708138 00:23:31.027 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1708138 /var/tmp/bdevperf.sock 00:23:31.027 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1708138 ']' 00:23:31.027 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.027 19:15:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:31.027 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:31.027 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:31.027 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:31.027 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.027 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:31.027 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.028 { 00:23:31.028 "params": { 00:23:31.028 "name": "Nvme$subsystem", 00:23:31.028 "trtype": "$TEST_TRANSPORT", 00:23:31.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.028 "adrfam": "ipv4", 00:23:31.028 "trsvcid": "$NVMF_PORT", 00:23:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.028 "hdgst": ${hdgst:-false}, 00:23:31.028 "ddgst": ${ddgst:-false} 00:23:31.028 }, 00:23:31.028 "method": "bdev_nvme_attach_controller" 00:23:31.028 } 00:23:31.028 EOF 00:23:31.028 )") 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.028 { 00:23:31.028 "params": { 00:23:31.028 "name": "Nvme$subsystem", 00:23:31.028 "trtype": "$TEST_TRANSPORT", 00:23:31.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.028 "adrfam": "ipv4", 00:23:31.028 "trsvcid": "$NVMF_PORT", 00:23:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.028 "hdgst": ${hdgst:-false}, 00:23:31.028 "ddgst": ${ddgst:-false} 00:23:31.028 }, 00:23:31.028 "method": "bdev_nvme_attach_controller" 00:23:31.028 } 00:23:31.028 EOF 00:23:31.028 )") 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.028 { 00:23:31.028 "params": { 00:23:31.028 
"name": "Nvme$subsystem", 00:23:31.028 "trtype": "$TEST_TRANSPORT", 00:23:31.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.028 "adrfam": "ipv4", 00:23:31.028 "trsvcid": "$NVMF_PORT", 00:23:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.028 "hdgst": ${hdgst:-false}, 00:23:31.028 "ddgst": ${ddgst:-false} 00:23:31.028 }, 00:23:31.028 "method": "bdev_nvme_attach_controller" 00:23:31.028 } 00:23:31.028 EOF 00:23:31.028 )") 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.028 { 00:23:31.028 "params": { 00:23:31.028 "name": "Nvme$subsystem", 00:23:31.028 "trtype": "$TEST_TRANSPORT", 00:23:31.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.028 "adrfam": "ipv4", 00:23:31.028 "trsvcid": "$NVMF_PORT", 00:23:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.028 "hdgst": ${hdgst:-false}, 00:23:31.028 "ddgst": ${ddgst:-false} 00:23:31.028 }, 00:23:31.028 "method": "bdev_nvme_attach_controller" 00:23:31.028 } 00:23:31.028 EOF 00:23:31.028 )") 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.028 { 00:23:31.028 "params": { 00:23:31.028 "name": "Nvme$subsystem", 00:23:31.028 "trtype": "$TEST_TRANSPORT", 00:23:31.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.028 "adrfam": "ipv4", 00:23:31.028 "trsvcid": "$NVMF_PORT", 00:23:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.028 "hdgst": ${hdgst:-false}, 00:23:31.028 "ddgst": ${ddgst:-false} 00:23:31.028 }, 00:23:31.028 "method": "bdev_nvme_attach_controller" 00:23:31.028 } 00:23:31.028 EOF 00:23:31.028 )") 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.028 { 00:23:31.028 "params": { 00:23:31.028 "name": "Nvme$subsystem", 00:23:31.028 "trtype": "$TEST_TRANSPORT", 00:23:31.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.028 "adrfam": "ipv4", 00:23:31.028 "trsvcid": "$NVMF_PORT", 00:23:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.028 "hdgst": ${hdgst:-false}, 00:23:31.028 "ddgst": ${ddgst:-false} 00:23:31.028 }, 00:23:31.028 "method": "bdev_nvme_attach_controller" 00:23:31.028 } 00:23:31.028 EOF 00:23:31.028 )") 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.028 { 00:23:31.028 "params": { 00:23:31.028 "name": "Nvme$subsystem", 00:23:31.028 "trtype": "$TEST_TRANSPORT", 00:23:31.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.028 "adrfam": "ipv4", 00:23:31.028 "trsvcid": "$NVMF_PORT", 00:23:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.028 "hdgst": ${hdgst:-false}, 00:23:31.028 "ddgst": ${ddgst:-false} 00:23:31.028 }, 00:23:31.028 "method": "bdev_nvme_attach_controller" 00:23:31.028 } 00:23:31.028 EOF 00:23:31.028 )") 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.028 { 00:23:31.028 "params": { 00:23:31.028 "name": "Nvme$subsystem", 00:23:31.028 "trtype": "$TEST_TRANSPORT", 00:23:31.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.028 "adrfam": "ipv4", 00:23:31.028 "trsvcid": "$NVMF_PORT", 00:23:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.028 "hdgst": ${hdgst:-false}, 00:23:31.028 "ddgst": ${ddgst:-false} 00:23:31.028 }, 00:23:31.028 "method": "bdev_nvme_attach_controller" 00:23:31.028 } 00:23:31.028 EOF 00:23:31.028 )") 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.028 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.028 { 00:23:31.028 "params": { 00:23:31.028 "name": "Nvme$subsystem", 00:23:31.028 "trtype": "$TEST_TRANSPORT", 00:23:31.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.028 "adrfam": "ipv4", 00:23:31.028 "trsvcid": "$NVMF_PORT", 00:23:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.028 "hdgst": ${hdgst:-false}, 00:23:31.028 "ddgst": ${ddgst:-false} 00:23:31.028 }, 00:23:31.029 "method": "bdev_nvme_attach_controller" 00:23:31.029 } 00:23:31.029 EOF 00:23:31.029 )") 00:23:31.029 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:31.029 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:31.029 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:31.029 { 00:23:31.029 "params": { 00:23:31.029 "name": "Nvme$subsystem", 00:23:31.029 "trtype": "$TEST_TRANSPORT", 00:23:31.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.029 "adrfam": "ipv4", 00:23:31.029 "trsvcid": "$NVMF_PORT", 00:23:31.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.029 "hdgst": ${hdgst:-false}, 00:23:31.029 "ddgst": ${ddgst:-false} 00:23:31.029 }, 00:23:31.029 "method": "bdev_nvme_attach_controller" 00:23:31.029 } 00:23:31.029 EOF 00:23:31.029 )") 00:23:31.029 19:15:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:31.029 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:31.029 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:31.029 19:15:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:31.029 "params": { 00:23:31.029 "name": "Nvme1", 00:23:31.029 "trtype": "tcp", 00:23:31.029 "traddr": "10.0.0.2", 00:23:31.029 "adrfam": "ipv4", 00:23:31.029 "trsvcid": "4420", 00:23:31.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:31.029 "hdgst": false, 00:23:31.029 "ddgst": false 00:23:31.029 }, 00:23:31.029 "method": "bdev_nvme_attach_controller" 00:23:31.029 },{ 00:23:31.029 "params": { 00:23:31.029 "name": "Nvme2", 00:23:31.029 "trtype": "tcp", 00:23:31.029 "traddr": "10.0.0.2", 00:23:31.029 "adrfam": "ipv4", 00:23:31.029 "trsvcid": "4420", 00:23:31.029 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:31.029 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:31.029 "hdgst": false, 00:23:31.029 "ddgst": false 00:23:31.029 }, 00:23:31.029 "method": "bdev_nvme_attach_controller" 00:23:31.029 },{ 00:23:31.029 "params": { 00:23:31.029 "name": "Nvme3", 00:23:31.029 "trtype": "tcp", 00:23:31.029 "traddr": "10.0.0.2", 00:23:31.029 "adrfam": "ipv4", 00:23:31.029 "trsvcid": "4420", 00:23:31.029 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:31.029 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:31.029 "hdgst": false, 00:23:31.029 "ddgst": false 00:23:31.029 }, 00:23:31.029 "method": "bdev_nvme_attach_controller" 00:23:31.029 },{ 00:23:31.029 "params": { 00:23:31.029 "name": "Nvme4", 00:23:31.029 "trtype": "tcp", 00:23:31.029 "traddr": "10.0.0.2", 00:23:31.029 "adrfam": "ipv4", 00:23:31.029 "trsvcid": "4420", 00:23:31.029 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:31.029 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:31.029 "hdgst": false, 00:23:31.029 "ddgst": false 00:23:31.029 }, 00:23:31.029 "method": "bdev_nvme_attach_controller" 00:23:31.029 },{ 00:23:31.029 "params": { 00:23:31.029 "name": "Nvme5", 00:23:31.029 "trtype": "tcp", 00:23:31.029 "traddr": "10.0.0.2", 00:23:31.029 "adrfam": "ipv4", 00:23:31.029 "trsvcid": "4420", 00:23:31.029 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:31.029 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:31.029 "hdgst": false, 00:23:31.029 "ddgst": false 00:23:31.029 }, 00:23:31.029 "method": "bdev_nvme_attach_controller" 00:23:31.029 },{ 00:23:31.029 "params": { 00:23:31.029 "name": "Nvme6", 00:23:31.029 "trtype": "tcp", 00:23:31.029 "traddr": "10.0.0.2", 00:23:31.029 "adrfam": "ipv4", 00:23:31.029 "trsvcid": "4420", 00:23:31.029 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:31.029 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:31.029 "hdgst": false, 00:23:31.029 "ddgst": false 00:23:31.029 }, 00:23:31.029 "method": "bdev_nvme_attach_controller" 00:23:31.029 },{ 00:23:31.029 "params": { 00:23:31.029 "name": "Nvme7", 00:23:31.029 "trtype": "tcp", 00:23:31.029 "traddr": "10.0.0.2", 00:23:31.029 "adrfam": "ipv4", 00:23:31.029 "trsvcid": "4420", 00:23:31.029 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:31.029 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:31.029 "hdgst": false, 00:23:31.029 "ddgst": false 00:23:31.029 }, 00:23:31.029 "method": "bdev_nvme_attach_controller" 00:23:31.029 },{ 00:23:31.029 "params": { 00:23:31.029 "name": "Nvme8", 00:23:31.029 "trtype": "tcp", 
00:23:31.029 "traddr": "10.0.0.2", 00:23:31.029 "adrfam": "ipv4", 00:23:31.029 "trsvcid": "4420", 00:23:31.029 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:31.029 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:31.029 "hdgst": false, 00:23:31.029 "ddgst": false 00:23:31.029 }, 00:23:31.029 "method": "bdev_nvme_attach_controller" 00:23:31.029 },{ 00:23:31.029 "params": { 00:23:31.029 "name": "Nvme9", 00:23:31.029 "trtype": "tcp", 00:23:31.029 "traddr": "10.0.0.2", 00:23:31.029 "adrfam": "ipv4", 00:23:31.029 "trsvcid": "4420", 00:23:31.029 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:31.029 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:31.029 "hdgst": false, 00:23:31.029 "ddgst": false 00:23:31.029 }, 00:23:31.029 "method": "bdev_nvme_attach_controller" 00:23:31.029 },{ 00:23:31.029 "params": { 00:23:31.029 "name": "Nvme10", 00:23:31.029 "trtype": "tcp", 00:23:31.029 "traddr": "10.0.0.2", 00:23:31.029 "adrfam": "ipv4", 00:23:31.029 "trsvcid": "4420", 00:23:31.029 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:31.029 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:31.029 "hdgst": false, 00:23:31.029 "ddgst": false 00:23:31.029 }, 00:23:31.029 "method": "bdev_nvme_attach_controller" 00:23:31.029 }' 00:23:31.029 [2024-07-24 19:15:36.565266] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:23:31.029 [2024-07-24 19:15:36.565354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1708138 ] 00:23:31.029 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.029 [2024-07-24 19:15:36.642824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.288 [2024-07-24 19:15:36.782035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.188 Running I/O for 10 seconds... 
00:23:33.188 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:33.188 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:33.188 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:33.188 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.188 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:33.448 19:15:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:33.706 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:33.706 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:33.706 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:33.706 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.706 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:33.706 19:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:33.706 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.706 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:33.707 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:33.707 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1708138 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1708138 ']' 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1708138 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1708138 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1708138' 00:23:33.965 killing process with pid 1708138 00:23:33.965 19:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1708138
00:23:33.965 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1708138
00:23:34.224 Received shutdown signal, test time was about 1.066465 seconds
00:23:34.224
00:23:34.224 Latency(us)
00:23:34.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:34.224 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.224 Verification LBA range: start 0x0 length 0x400
00:23:34.224 Nvme1n1 : 1.05 183.63 11.48 0.00 0.00 343638.47 24563.86 347971.89
00:23:34.224 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.224 Verification LBA range: start 0x0 length 0x400
00:23:34.224 Nvme2n1 : 1.03 186.81 11.68 0.00 0.00 328152.81 25437.68 313796.08
00:23:34.224 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.224 Verification LBA range: start 0x0 length 0x400
00:23:34.224 Nvme3n1 : 1.04 184.81 11.55 0.00 0.00 324454.27 25243.50 337097.77
00:23:34.224 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.224 Verification LBA range: start 0x0 length 0x400
00:23:34.224 Nvme4n1 : 1.03 186.02 11.63 0.00 0.00 313270.17 42331.40 312242.63
00:23:34.224 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.224 Verification LBA range: start 0x0 length 0x400
00:23:34.224 Nvme5n1 : 1.05 182.01 11.38 0.00 0.00 312077.27 27962.03 310689.19
00:23:34.224 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.224 Verification LBA range: start 0x0 length 0x400
00:23:34.224 Nvme6n1 : 1.05 182.76 11.42 0.00 0.00 302442.07 26020.22 302921.96
00:23:34.224 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.224 Verification LBA range: start 0x0 length 0x400
00:23:34.224 Nvme7n1 : 1.07 180.22 11.26 0.00 0.00 300406.46 22913.33 349525.33
00:23:34.224 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.224 Verification LBA range: start 0x0 length 0x400
00:23:34.224 Nvme8n1 : 1.06 180.90 11.31 0.00 0.00 291032.43 42331.40 324670.20
00:23:34.224 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.224 Verification LBA range: start 0x0 length 0x400
00:23:34.224 Nvme9n1 : 1.00 127.47 7.97 0.00 0.00 396265.24 27767.85 361952.90
00:23:34.224 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:34.224 Verification LBA range: start 0x0 length 0x400
00:23:34.224 Nvme10n1 : 1.02 125.82 7.86 0.00 0.00 390852.46 27767.85 379040.81
00:23:34.224 ===================================================================================================================
00:23:34.224 Total : 1720.46 107.53 0.00 0.00 325737.76 22913.33 379040.81
00:23:34.483 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1707948
00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:35.459 rmmod nvme_tcp 00:23:35.459 rmmod nvme_fabrics 00:23:35.459 rmmod nvme_keyring 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1707948 ']' 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1707948 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1707948 ']' 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1707948 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:35.459 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1707948 00:23:35.717 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:35.717 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:35.717 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1707948' 00:23:35.717 killing process with pid 1707948 00:23:35.717 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1707948 00:23:35.717 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1707948 00:23:36.284 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.285 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
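
Cleanup above runs in a fixed order: the bdevperf state file and generated configs are removed, nvmfcleanup syncs and unloads the kernel initiator stack (the rmmod lines show nvme_tcp, nvme_fabrics and nvme_keyring going away), and only then is the target process reaped. Each kill goes through the killprocess guard traced above; a simplified sketch of its checks, condensed from the trace (the real helper also branches for non-Linux hosts and for sudo-wrapped processes):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # no PID, nothing to do
    kill -0 "$pid" 2>/dev/null || return 0    # already gone
    local pname
    pname=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_1 for an SPDK app
    [ "$pname" = sudo ] && return 1           # sudo wrappers are handled separately
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}
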
00:23:36.285 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:36.285 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:36.285 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:36.285 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.285 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.285 19:15:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:38.822 00:23:38.822 real 0m8.788s 00:23:38.822 user 0m27.626s 00:23:38.822 sys 0m1.749s 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:38.822 ************************************ 00:23:38.822 END TEST nvmf_shutdown_tc2 00:23:38.822 ************************************ 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:38.822 ************************************ 00:23:38.822 START TEST nvmf_shutdown_tc3 00:23:38.822 ************************************ 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
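
nvmf_tcp_fini then unwinds the network setup from the start of the test case: the [[ ... == nvmf_tgt_ns ]] comparison confirms no shared long-lived target namespace is in use, _remove_spdk_ns (whose body is not echoed in this trace) drops the per-test namespace, and the leftover address on cvl_0_1 is flushed so the next case can rerun nvmftestinit against the same two ports. A plausible minimal equivalent of that sweep, assuming the test namespaces are recognizable by their _ns_spdk suffix:

# Deleting a namespace returns the physical ports captured inside it to the
# default namespace, so cvl_0_0 reappears automatically for the next case.
for ns in $(ip netns list | awk '/_ns_spdk/ {print $1}'); do
    ip netns delete "$ns"
done
ip -4 addr flush cvl_0_1

The timing summary and the banners that follow are run_test closing nvmf_shutdown_tc2 and immediately starting nvmf_shutdown_tc3 on the freshly reset ports.
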
00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:38.822 19:15:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:38.822 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:38.822 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:38.822 19:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.822 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:38.823 Found net devices under 0000:84:00.0: cvl_0_0 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:38.823 Found net devices under 0000:84:00.1: cvl_0_1 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.823 19:15:44 
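The discovery loop above resolves each whitelisted e810 PCI function to its kernel net device through sysfs; that is all the 'Found net devices under ...' lines report. The lookup in isolation, with the PCI addresses taken from this run (a sketch, not the verbatim common.sh loop):

# A port with a bound driver exposes its interface name under net/ in
# its sysfs node; cvl_0_0 and cvl_0_1 in this run.
for pci in 0000:84:00.0 0000:84:00.1; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] || continue            # no driver bound to this port
        echo "Found net devices under $pci: ${dev##*/}"
    done
done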
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:38.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:23:38.823 00:23:38.823 --- 10.0.0.2 ping statistics --- 00:23:38.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.823 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:38.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:23:38.823 00:23:38.823 --- 10.0.0.1 ping statistics --- 00:23:38.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.823 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1709171 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1709171 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1709171 ']' 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
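nvmf_tcp_init, traced above, wires the two ports of the NIC into a loopback topology: cvl_0_0 moves into a private namespace and becomes the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and one ping in each direction proves the path before the target launches. Condensed from the commands in this trace:

# Put the target port in its own namespace so both ports of one NIC can
# carry real traffic to each other.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP default port
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side command that follows is prefixed with ip netns exec cvl_0_0_ns_spdk via NVMF_TARGET_NS_CMD, which is why the nvmf_tgt launch line above carries the netns wrapper.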
00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:38.823 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:38.823 [2024-07-24 19:15:44.307897] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:23:38.823 [2024-07-24 19:15:44.308061] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.823 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.823 [2024-07-24 19:15:44.435568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.082 [2024-07-24 19:15:44.578179] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.082 [2024-07-24 19:15:44.578250] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.082 [2024-07-24 19:15:44.578271] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.082 [2024-07-24 19:15:44.578288] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.082 [2024-07-24 19:15:44.578302] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.082 [2024-07-24 19:15:44.578396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.083 [2024-07-24 19:15:44.578462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.083 [2024-07-24 19:15:44.578523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:39.083 [2024-07-24 19:15:44.578527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.083 [2024-07-24 19:15:44.747612] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.083 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.342 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.342 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.342 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.342 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.342 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:39.342 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:39.342 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:39.342 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.342 19:15:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:23:39.342 Malloc1 00:23:39.342 [2024-07-24 19:15:44.837706] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.342 Malloc2 00:23:39.342 Malloc3 00:23:39.342 Malloc4 00:23:39.342 Malloc5 00:23:39.605 Malloc6 00:23:39.605 Malloc7 00:23:39.605 Malloc8 00:23:39.605 Malloc9 00:23:39.605 Malloc10 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1709351 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1709351 /var/tmp/bdevperf.sock 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1709351 ']' 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:39.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
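The rpcs.txt batch built above is what produced Malloc1 through Malloc10 and the ten listening subsystems; its heredoc bodies are elided by xtrace. Per subsystem it plausibly contains standard SPDK RPCs along these lines; the malloc size, block size, and serial format are assumptions, not the verbatim shutdown.sh content:

# Assumed shape of one rpcs.txt stanza for subsystem $i; fed to rpc_cmd
# (SPDK's rpc.py wrapper) against /var/tmp/spdk.sock.
bdev_malloc_create -b Malloc$i 128 512                                # size/block are guesses
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

bdevperf is then pointed at those subsystems through the JSON emitted by gen_nvmf_target_json, handed over as --json /dev/fd/63 via process substitution in the launch line above.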
00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:39.868 { 00:23:39.868 "params": { 00:23:39.868 "name": "Nvme$subsystem", 00:23:39.868 "trtype": "$TEST_TRANSPORT", 00:23:39.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.868 "adrfam": "ipv4", 00:23:39.868 "trsvcid": "$NVMF_PORT", 00:23:39.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.868 "hdgst": ${hdgst:-false}, 00:23:39.868 "ddgst": ${ddgst:-false} 00:23:39.868 }, 00:23:39.868 "method": "bdev_nvme_attach_controller" 00:23:39.868 } 00:23:39.868 EOF 00:23:39.868 )") 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:39.868 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:39.868 { 00:23:39.868 "params": { 00:23:39.868 "name": "Nvme$subsystem", 00:23:39.868 "trtype": "$TEST_TRANSPORT", 00:23:39.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.868 "adrfam": "ipv4", 00:23:39.868 "trsvcid": "$NVMF_PORT", 00:23:39.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.869 "hdgst": ${hdgst:-false}, 00:23:39.869 "ddgst": ${ddgst:-false} 00:23:39.869 }, 00:23:39.869 "method": "bdev_nvme_attach_controller" 00:23:39.869 } 00:23:39.869 EOF 00:23:39.869 )") 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:39.869 { 00:23:39.869 "params": { 00:23:39.869 "name": "Nvme$subsystem", 00:23:39.869 "trtype": "$TEST_TRANSPORT", 00:23:39.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.869 "adrfam": "ipv4", 00:23:39.869 "trsvcid": "$NVMF_PORT", 00:23:39.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.869 "hdgst": ${hdgst:-false}, 00:23:39.869 "ddgst": ${ddgst:-false} 00:23:39.869 }, 00:23:39.869 "method": "bdev_nvme_attach_controller" 00:23:39.869 } 00:23:39.869 EOF 00:23:39.869 )") 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:39.869 { 00:23:39.869 "params": { 00:23:39.869 "name": "Nvme$subsystem", 00:23:39.869 
"trtype": "$TEST_TRANSPORT", 00:23:39.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.869 "adrfam": "ipv4", 00:23:39.869 "trsvcid": "$NVMF_PORT", 00:23:39.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.869 "hdgst": ${hdgst:-false}, 00:23:39.869 "ddgst": ${ddgst:-false} 00:23:39.869 }, 00:23:39.869 "method": "bdev_nvme_attach_controller" 00:23:39.869 } 00:23:39.869 EOF 00:23:39.869 )") 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:39.869 { 00:23:39.869 "params": { 00:23:39.869 "name": "Nvme$subsystem", 00:23:39.869 "trtype": "$TEST_TRANSPORT", 00:23:39.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.869 "adrfam": "ipv4", 00:23:39.869 "trsvcid": "$NVMF_PORT", 00:23:39.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.869 "hdgst": ${hdgst:-false}, 00:23:39.869 "ddgst": ${ddgst:-false} 00:23:39.869 }, 00:23:39.869 "method": "bdev_nvme_attach_controller" 00:23:39.869 } 00:23:39.869 EOF 00:23:39.869 )") 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:39.869 { 00:23:39.869 "params": { 00:23:39.869 "name": "Nvme$subsystem", 00:23:39.869 "trtype": "$TEST_TRANSPORT", 00:23:39.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.869 "adrfam": "ipv4", 00:23:39.869 "trsvcid": "$NVMF_PORT", 00:23:39.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.869 "hdgst": ${hdgst:-false}, 00:23:39.869 "ddgst": ${ddgst:-false} 00:23:39.869 }, 00:23:39.869 "method": "bdev_nvme_attach_controller" 00:23:39.869 } 00:23:39.869 EOF 00:23:39.869 )") 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:39.869 { 00:23:39.869 "params": { 00:23:39.869 "name": "Nvme$subsystem", 00:23:39.869 "trtype": "$TEST_TRANSPORT", 00:23:39.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.869 "adrfam": "ipv4", 00:23:39.869 "trsvcid": "$NVMF_PORT", 00:23:39.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.869 "hdgst": ${hdgst:-false}, 00:23:39.869 "ddgst": ${ddgst:-false} 00:23:39.869 }, 00:23:39.869 "method": "bdev_nvme_attach_controller" 00:23:39.869 } 00:23:39.869 EOF 00:23:39.869 )") 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:39.869 19:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:39.869 { 00:23:39.869 "params": { 00:23:39.869 "name": "Nvme$subsystem", 00:23:39.869 "trtype": "$TEST_TRANSPORT", 00:23:39.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.869 "adrfam": "ipv4", 00:23:39.869 "trsvcid": "$NVMF_PORT", 00:23:39.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.869 "hdgst": ${hdgst:-false}, 00:23:39.869 "ddgst": ${ddgst:-false} 00:23:39.869 }, 00:23:39.869 "method": "bdev_nvme_attach_controller" 00:23:39.869 } 00:23:39.869 EOF 00:23:39.869 )") 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:39.869 { 00:23:39.869 "params": { 00:23:39.869 "name": "Nvme$subsystem", 00:23:39.869 "trtype": "$TEST_TRANSPORT", 00:23:39.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.869 "adrfam": "ipv4", 00:23:39.869 "trsvcid": "$NVMF_PORT", 00:23:39.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.869 "hdgst": ${hdgst:-false}, 00:23:39.869 "ddgst": ${ddgst:-false} 00:23:39.869 }, 00:23:39.869 "method": "bdev_nvme_attach_controller" 00:23:39.869 } 00:23:39.869 EOF 00:23:39.869 )") 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:39.869 { 00:23:39.869 "params": { 00:23:39.869 "name": "Nvme$subsystem", 00:23:39.869 "trtype": "$TEST_TRANSPORT", 00:23:39.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:39.869 "adrfam": "ipv4", 00:23:39.869 "trsvcid": "$NVMF_PORT", 00:23:39.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:39.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:39.869 "hdgst": ${hdgst:-false}, 00:23:39.869 "ddgst": ${ddgst:-false} 00:23:39.869 }, 00:23:39.869 "method": "bdev_nvme_attach_controller" 00:23:39.869 } 00:23:39.869 EOF 00:23:39.869 )") 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
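gen_nvmf_target_json, whose trace this is, stamps one heredoc template per requested subsystem into a bash array and joins the fragments on a comma IFS before running them through jq; the expanded result is printed just below. A simplified skeleton of the pattern, assuming the real helper nests the fragments inside SPDK's full JSON-config schema rather than the bare array used here:

# One JSON fragment per subsystem from a shared template; jq validates
# and pretty-prints the joined document.
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # "${config[*]}" joins on the first IFS character, producing
    # {...},{...},...; the [] wrapper keeps this sketch valid for jq.
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
}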
00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:39.869 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:39.869 "params": { 00:23:39.869 "name": "Nvme1", 00:23:39.869 "trtype": "tcp", 00:23:39.869 "traddr": "10.0.0.2", 00:23:39.869 "adrfam": "ipv4", 00:23:39.869 "trsvcid": "4420", 00:23:39.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.869 "hdgst": false, 00:23:39.869 "ddgst": false 00:23:39.869 }, 00:23:39.869 "method": "bdev_nvme_attach_controller" 00:23:39.869 },{ 00:23:39.869 "params": { 00:23:39.869 "name": "Nvme2", 00:23:39.869 "trtype": "tcp", 00:23:39.869 "traddr": "10.0.0.2", 00:23:39.869 "adrfam": "ipv4", 00:23:39.869 "trsvcid": "4420", 00:23:39.869 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:39.869 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:39.869 "hdgst": false, 00:23:39.869 "ddgst": false 00:23:39.869 }, 00:23:39.869 "method": "bdev_nvme_attach_controller" 00:23:39.869 },{ 00:23:39.869 "params": { 00:23:39.869 "name": "Nvme3", 00:23:39.869 "trtype": "tcp", 00:23:39.869 "traddr": "10.0.0.2", 00:23:39.869 "adrfam": "ipv4", 00:23:39.869 "trsvcid": "4420", 00:23:39.869 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:39.869 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:39.869 "hdgst": false, 00:23:39.870 "ddgst": false 00:23:39.870 }, 00:23:39.870 "method": "bdev_nvme_attach_controller" 00:23:39.870 },{ 00:23:39.870 "params": { 00:23:39.870 "name": "Nvme4", 00:23:39.870 "trtype": "tcp", 00:23:39.870 "traddr": "10.0.0.2", 00:23:39.870 "adrfam": "ipv4", 00:23:39.870 "trsvcid": "4420", 00:23:39.870 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:39.870 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:39.870 "hdgst": false, 00:23:39.870 "ddgst": false 00:23:39.870 }, 00:23:39.870 "method": "bdev_nvme_attach_controller" 00:23:39.870 },{ 00:23:39.870 "params": { 00:23:39.870 "name": "Nvme5", 00:23:39.870 "trtype": "tcp", 00:23:39.870 "traddr": "10.0.0.2", 00:23:39.870 "adrfam": "ipv4", 00:23:39.870 "trsvcid": "4420", 00:23:39.870 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:39.870 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:39.870 "hdgst": false, 00:23:39.870 "ddgst": false 00:23:39.870 }, 00:23:39.870 "method": "bdev_nvme_attach_controller" 00:23:39.870 },{ 00:23:39.870 "params": { 00:23:39.870 "name": "Nvme6", 00:23:39.870 "trtype": "tcp", 00:23:39.870 "traddr": "10.0.0.2", 00:23:39.870 "adrfam": "ipv4", 00:23:39.870 "trsvcid": "4420", 00:23:39.870 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:39.870 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:39.870 "hdgst": false, 00:23:39.870 "ddgst": false 00:23:39.870 }, 00:23:39.870 "method": "bdev_nvme_attach_controller" 00:23:39.870 },{ 00:23:39.870 "params": { 00:23:39.870 "name": "Nvme7", 00:23:39.870 "trtype": "tcp", 00:23:39.870 "traddr": "10.0.0.2", 00:23:39.870 "adrfam": "ipv4", 00:23:39.870 "trsvcid": "4420", 00:23:39.870 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:39.870 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:39.870 "hdgst": false, 00:23:39.870 "ddgst": false 00:23:39.870 }, 00:23:39.870 "method": "bdev_nvme_attach_controller" 00:23:39.870 },{ 00:23:39.870 "params": { 00:23:39.870 "name": "Nvme8", 00:23:39.870 "trtype": "tcp", 00:23:39.870 "traddr": "10.0.0.2", 00:23:39.870 "adrfam": "ipv4", 00:23:39.870 "trsvcid": "4420", 00:23:39.870 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:39.870 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:39.870 "hdgst": false, 00:23:39.870 "ddgst": false 00:23:39.870 }, 00:23:39.870 "method": "bdev_nvme_attach_controller" 00:23:39.870 },{ 00:23:39.870 "params": { 00:23:39.870 "name": "Nvme9", 00:23:39.870 "trtype": "tcp", 00:23:39.870 "traddr": "10.0.0.2", 00:23:39.870 "adrfam": "ipv4", 00:23:39.870 "trsvcid": "4420", 00:23:39.870 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:39.870 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:39.870 "hdgst": false, 00:23:39.870 "ddgst": false 00:23:39.870 }, 00:23:39.870 "method": "bdev_nvme_attach_controller" 00:23:39.870 },{ 00:23:39.870 "params": { 00:23:39.870 "name": "Nvme10", 00:23:39.870 "trtype": "tcp", 00:23:39.870 "traddr": "10.0.0.2", 00:23:39.870 "adrfam": "ipv4", 00:23:39.870 "trsvcid": "4420", 00:23:39.870 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:39.870 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:39.870 "hdgst": false, 00:23:39.870 "ddgst": false 00:23:39.870 }, 00:23:39.870 "method": "bdev_nvme_attach_controller" 00:23:39.870 }' 00:23:39.870 [2024-07-24 19:15:45.396882] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:23:39.870 [2024-07-24 19:15:45.396971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709351 ] 00:23:39.870 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.870 [2024-07-24 19:15:45.475994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.128 [2024-07-24 19:15:45.615756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.504 Running I/O for 10 seconds... 00:23:42.071 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:42.071 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:42.071 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:42.071 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.071 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.071 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.071 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:42.071 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:42.071 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:42.071 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:42.071 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:42.071 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:42.072 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:23:42.072 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:42.072 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:42.072 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:42.072 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.072 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.072 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.072 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=18 00:23:42.072 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 18 -ge 100 ']' 00:23:42.072 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:42.330 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:42.330 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:42.330 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:42.330 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:42.330 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.330 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.330 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.588 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:42.588 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:42.588 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:42.588 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:42.588 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:42.588 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:42.588 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:42.588 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.588 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.861 19:15:48 
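waitforio, whose first two polls (18, then 67 reads) are traced above and whose successful third poll follows below, gives the bdevperf workload time to ramp: it queries the bdevperf RPC socket for the bdev's completed reads every 0.25 s, up to ten times, and succeeds once the count reaches 100. Reconstructed from the trace:

# Poll bdev_get_iostat on the bdevperf socket until the bdev has
# completed at least 100 reads, or give up after 10 tries.
waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if ((count >= 100)); then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

Invoked above as: waitforio /var/tmp/bdevperf.sock Nvme1n1.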
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1709171 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1709171 ']' 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1709171 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1709171 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1709171' 00:23:42.861 killing process with pid 1709171 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1709171 00:23:42.861 19:15:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1709171 00:23:42.861 [2024-07-24 19:15:48.361569] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a5dc0 is same with the state(5) to be set 00:23:42.861 [2024-07-24 19:15:48.361659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a5dc0 is same with the state(5) to be set 00:23:42.861 [2024-07-24 19:15:48.361681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a5dc0 is same with the state(5) to be set 00:23:42.861 [2024-07-24 19:15:48.361698] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a5dc0 is same with the state(5) to be set 00:23:42.861 [2024-07-24 19:15:48.361728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a5dc0 is same with the state(5) to be set 00:23:42.861 [2024-07-24 19:15:48.361746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a5dc0 is same with the state(5) to be set 00:23:42.861 [2024-07-24 19:15:48.361763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a5dc0 is same with the state(5) to be set 00:23:42.861 [2024-07-24 19:15:48.361779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a5dc0 is same with the state(5) to be set 00:23:42.861 [2024-07-24 19:15:48.361795] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a5dc0 is same with the state(5) to be set
[this tcp.c:1653 'recv state of tqpair ... is same with the state(5) to be set' error repeats dozens of times, first for tqpair=0x10a5dc0 and then for tqpair=0x1273d80, while the target shuts down]
with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.364970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.364987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365072] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365105] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365122] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365189] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365332] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365349] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.862 [2024-07-24 19:15:48.365439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.863 [2024-07-24 19:15:48.365459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.863 [2024-07-24 19:15:48.365477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.863 [2024-07-24 19:15:48.365494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273d80 is same with the state(5) to be set 00:23:42.863 [2024-07-24 19:15:48.368084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.863 [2024-07-24 19:15:48.368150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.863 [2024-07-24 19:15:48.368175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.863 [2024-07-24 19:15:48.368193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.863 [2024-07-24 19:15:48.368212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.863 [2024-07-24 19:15:48.368230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.863 [2024-07-24 19:15:48.368249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.863 [2024-07-24 19:15:48.368266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.863 [2024-07-24 19:15:48.368285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173a270 is same with the state(5) to be set 00:23:42.863 [2024-07-24 19:15:48.368425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.863 [2024-07-24 19:15:48.368466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.863 [2024-07-24 19:15:48.368486] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.863 [2024-07-24 19:15:48.368505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.863 [2024-07-24 19:15:48.368531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.863 [2024-07-24 19:15:48.368550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.863 [2024-07-24 19:15:48.368568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.863 [2024-07-24 19:15:48.368586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.863 [2024-07-24 19:15:48.368604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c8200 is same with the state(5) to be set 00:23:42.863 [2024-07-24 19:15:48.369442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.863 [2024-07-24 19:15:48.369481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.863 [2024-07-24 19:15:48.369515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.863 [2024-07-24 19:15:48.369536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.863 [2024-07-24 19:15:48.369559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.863 [2024-07-24 19:15:48.369588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.863 [2024-07-24 19:15:48.369609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.863 [2024-07-24 19:15:48.369627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.863 [2024-07-24 19:15:48.369647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.863 [2024-07-24 19:15:48.369666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.863 [2024-07-24 19:15:48.369687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.863 [2024-07-24 19:15:48.369706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.863 [2024-07-24 19:15:48.369727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.863 [2024-07-24 19:15:48.369745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.863 [2024-07-24 19:15:48.369765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.863 [2024-07-24 19:15:48.369785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.863 [2024-07-24 19:15:48.369806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.863 [2024-07-24 19:15:48.369825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.863 [2024-07-24 19:15:48.369849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.863 [2024-07-24 19:15:48.369867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.863 [2024-07-24 19:15:48.369903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.863 [2024-07-24 19:15:48.369924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.863 [2024-07-24 19:15:48.369945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.863 [2024-07-24 19:15:48.369963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.863 [2024-07-24 19:15:48.369984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.863 [2024-07-24 19:15:48.369976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.863 [2024-07-24 19:15:48.370021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.863 [2024-07-24 19:15:48.370044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.863 [2024-07-24 19:15:48.370064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.863 [2024-07-24 19:15:48.370082] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.863 [2024-07-24 19:15:48.370101] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.863 [2024-07-24 19:15:48.370118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.863 [2024-07-24 19:15:48.370135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.863 [2024-07-24 19:15:48.370171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.863 [2024-07-24 19:15:48.370188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.863 [2024-07-24 19:15:48.370205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.863 [2024-07-24 19:15:48.370223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370240] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.863 [2024-07-24 19:15:48.370257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.863 [2024-07-24 19:15:48.370274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.863 [2024-07-24 19:15:48.370291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.863 [2024-07-24 19:15:48.370303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.370345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.370383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.370417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370467] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.370494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.370530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.370564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370601] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.370619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370636] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.370664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.370703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.370738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.370791] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370809] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.370827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370844] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.370882] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.370917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.370952] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.370970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.370987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.371010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.371012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.371030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.371035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.371048] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.371054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.371066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.371075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.371083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.371094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.864 [2024-07-24 19:15:48.371100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.864 [2024-07-24 19:15:48.371115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.864 [2024-07-24 19:15:48.371117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.865 [2024-07-24 19:15:48.371136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.865 [2024-07-24 19:15:48.371157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6740 is same with the state(5) to be set
00:23:42.865 [2024-07-24 19:15:48.371159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.371962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.371982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.372001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.372022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.372041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.372061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.372082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.372103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.372122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.372145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.865 [2024-07-24 19:15:48.372165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.865 [2024-07-24 19:15:48.372241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:23:42.865 [2024-07-24 19:15:48.372339] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17ad390 was disconnected and freed. reset controller.
00:23:42.865 [2024-07-24 19:15:48.372907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.865 [2024-07-24 19:15:48.372958] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.865 [2024-07-24 19:15:48.372979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.865 [2024-07-24 19:15:48.372997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.865 [2024-07-24 19:15:48.373014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.865 [2024-07-24 19:15:48.373030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.865 [2024-07-24 19:15:48.373047] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.866 [2024-07-24 19:15:48.373064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.866 [2024-07-24 19:15:48.373090] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.866 [2024-07-24 19:15:48.373108] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.866 [2024-07-24 19:15:48.373125] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.866 [2024-07-24 19:15:48.373142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.866 [2024-07-24 19:15:48.373158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.866 [2024-07-24 19:15:48.373175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.866 [2024-07-24 19:15:48.373192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.866 [2024-07-24 19:15:48.373209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.866 [2024-07-24 19:15:48.373233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.866 [2024-07-24 19:15:48.373251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.866 [2024-07-24 19:15:48.373267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.866 [2024-07-24 19:15:48.373283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.866 [2024-07-24 19:15:48.373300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set 00:23:42.866 [2024-07-24 19:15:48.373317] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x10a6c20 is same with the state(5) to be set
00:23:42.866 [2024-07-24 19:15:48.373333 .. 19:15:48.374029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a6c20 is same with the state(5) to be set [message repeated 40+ times over this interval]
00:23:42.866 [2024-07-24 19:15:48.376294 .. 19:15:48.377377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12dd610 is same with the state(5) to be set [message repeated 60+ times over this interval]
00:23:42.867 [2024-07-24 19:15:48.377394 .. 19:15:48.378285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43..63 nsid:1 lba:30080..32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [21 commands, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:42.868 [2024-07-24 19:15:48.378306 .. 19:15:48.378959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0..15 nsid:1 lba:24576..26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [16 commands, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:42.868 [2024-07-24 19:15:48.378962 .. 19:15:48.380041] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ddad0 is same with the state(5) to be set [message repeated ~60 times over this interval, spliced mid-line in the raw capture with the NOTICE lines below]
00:23:42.868 [2024-07-24 19:15:48.378982 .. 19:15:48.380126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16..42 nsid:1 lba:26624..29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [27 commands, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:42.869 [2024-07-24 19:15:48.380231] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16d7220 was disconnected and freed. reset controller.
00:23:42.869 [2024-07-24 19:15:48.381670] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:42.869 [2024-07-24 19:15:48.381768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a6950 (9): Bad file descriptor
00:23:42.869 [2024-07-24 19:15:48.381809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173a270 (9): Bad file descriptor
00:23:42.869 [2024-07-24 19:15:48.381881 .. 19:15:48.382048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0..3 nsid:0 cdw10:00000000 cdw11:00000000 [each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]; nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dc610 is same with the state(5) to be set
00:23:42.869 [2024-07-24 19:15:48.382134 .. 19:15:48.382290] same ASYNC EVENT REQUEST abort sequence, ending with nvme_tcp.c: 327: The recv state of tqpair=0x18a0f80 is same with the state(5) to be set
00:23:42.870 [2024-07-24 19:15:48.382352 .. 19:15:48.382567] same ASYNC EVENT REQUEST abort sequence, ending with nvme_tcp.c: 327: The recv state of tqpair=0x18a0d50 is same with the state(5) to be set
00:23:42.870 [2024-07-24 19:15:48.382608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c8200 (9): Bad file descriptor
00:23:42.870 [2024-07-24 19:15:48.382632 ..] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12738a0 is same with the state(5) to be set [start of another repeated run, spliced mid-line in the raw capture with the admin-queue notices below]
00:23:42.870 [2024-07-24 19:15:48.382675 .. 19:15:48.382852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0..3 nsid:0 cdw10:00000000 cdw11:00000000 [each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]; nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170a740 is same with the state(5) to be set
00:23:42.871 [2024-07-24 19:15:48.382906 .. 19:15:48.383098] same ASYNC EVENT REQUEST abort sequence, ending with nvme_tcp.c: 327: The recv state of tqpair=0x170a360 is same with the state(5) to be set
00:23:42.871 [2024-07-24 19:15:48.383160 .. 19:15:48.383332] same ASYNC EVENT REQUEST abort sequence, ending with nvme_tcp.c: 327: The recv state of tqpair=0x1706ec0 is same with the state(5) to be set
00:23:42.871 [2024-07-24 19:15:48.382632 .. 19:15:48.383774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12738a0 is same with the state(5) to be set [message repeated ~60 times over this interval, interleaved with the admin-queue notices above]
00:23:42.871 [2024-07-24 19:15:48.385435 .. 19:15:48.385950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16..28 nsid:1 lba:26624..28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [13 commands, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0] 00:23:42.872 [2024-07-24 19:15:48.385969] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.385990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.386960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.386979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.387000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.387019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.387040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.387059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.387080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.387099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.387120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.387138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.387159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.387178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.387199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.387218] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.387239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.387258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.387279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.387297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.387318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.387337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.387358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.387376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.387402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.387422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.387456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.872 [2024-07-24 19:15:48.387477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.872 [2024-07-24 19:15:48.387498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.387517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.387538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.387557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.387578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.387597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.387618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.387637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.387658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.387676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.387698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.387717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.387737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.387756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.387777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.387796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.387816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.387835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.387856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.387874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.387895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.387918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.387941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.387960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.387981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.388000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.388021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.388040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.388061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.388080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.388180] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x184b110 was disconnected and freed. reset controller. 00:23:42.873 [2024-07-24 19:15:48.388391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:42.873 [2024-07-24 19:15:48.388447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170a740 (9): Bad file descriptor 00:23:42.873 [2024-07-24 19:15:48.390936] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:42.873 [2024-07-24 19:15:48.391171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.873 [2024-07-24 19:15:48.391212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a6950 with addr=10.0.0.2, port=4420 00:23:42.873 [2024-07-24 19:15:48.391236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a6950 is same with the state(5) to be set 00:23:42.873 [2024-07-24 19:15:48.391337] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:42.873 [2024-07-24 19:15:48.391710] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:42.873 [2024-07-24 19:15:48.392299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.392332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.392363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.392385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.392407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.392426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.392460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.392480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.392501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.392520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.392550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.392571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.392592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.392612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.392633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.392652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.392673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.392692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.392713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.392732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.392753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.392772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.392793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.392813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.392835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.392854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.873 [2024-07-24 19:15:48.392876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.873 [2024-07-24 19:15:48.392895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.392917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.392936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.392958] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.392977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.392999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.393967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.393989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.394008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.394029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.394048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.394069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.394093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.394116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.394135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.394156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.394175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.394195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.394214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.394236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.394255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.394276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.874 [2024-07-24 19:15:48.394294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.874 [2024-07-24 19:15:48.394316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.394356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.394396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.394454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.394513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.394554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.394595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.394641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.394682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.394722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.394762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.394802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.394843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.394882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.394932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.394973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.875 [2024-07-24 19:15:48.394992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.395121] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1849c30 was disconnected and freed. reset controller. 
00:23:42.875 [2024-07-24 19:15:48.395309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:42.875 [2024-07-24 19:15:48.395355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a0f80 (9): Bad file descriptor 00:23:42.875 [2024-07-24 19:15:48.395523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.875 [2024-07-24 19:15:48.395569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170a740 with addr=10.0.0.2, port=4420 00:23:42.875 [2024-07-24 19:15:48.395592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170a740 is same with the state(5) to be set 00:23:42.875 [2024-07-24 19:15:48.395619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a6950 (9): Bad file descriptor 00:23:42.875 [2024-07-24 19:15:48.395668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dc610 (9): Bad file descriptor 00:23:42.875 [2024-07-24 19:15:48.395751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.875 [2024-07-24 19:15:48.395779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.395800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.875 [2024-07-24 19:15:48.395817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.395836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.875 [2024-07-24 19:15:48.395853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.395872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:42.875 [2024-07-24 19:15:48.395890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.875 [2024-07-24 19:15:48.395907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173abd0 is same with the state(5) to be set 00:23:42.875 [2024-07-24 19:15:48.395948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a0d50 (9): Bad file descriptor 00:23:42.875 [2024-07-24 19:15:48.395996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170a360 (9): Bad file descriptor 00:23:42.875 [2024-07-24 19:15:48.396037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1706ec0 (9): Bad file descriptor 00:23:42.875 [2024-07-24 19:15:48.396238] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:42.875 [2024-07-24 19:15:48.396367] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:42.875 [2024-07-24 19:15:48.398409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:42.875 [2024-07-24 19:15:48.398498] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170a740 (9): Bad file descriptor
00:23:42.875 [2024-07-24 19:15:48.398531] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:23:42.875 [2024-07-24 19:15:48.398550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:23:42.875 [2024-07-24 19:15:48.398570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:23:42.875 [2024-07-24 19:15:48.398666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.875 [2024-07-24 19:15:48.398696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION completion pair repeats for cid:1-63 (lba 16512-24448, stepping by 128) ...]
00:23:42.877 [2024-07-24 19:15:48.401347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a01b0 is same with the state(5) to be set
00:23:42.877 [2024-07-24 19:15:48.403182] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:42.877 [2024-07-24 19:15:48.403285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.877 [2024-07-24 19:15:48.403313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION completion pair repeats for cid:1-63 (lba 16512-24448, stepping by 128) ...]
00:23:42.879 [2024-07-24 19:15:48.405948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17413f0 is same with the state(5) to be set
00:23:42.879 [2024-07-24 19:15:48.409013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:42.879 [2024-07-24 19:15:48.409052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:42.879 [2024-07-24 19:15:48.409084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:42.879 [2024-07-24 19:15:48.409360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:42.879 [2024-07-24 19:15:48.409400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0f80 with addr=10.0.0.2, port=4420
00:23:42.879 [2024-07-24 19:15:48.409423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a0f80 is same with the state(5) to be set
00:23:42.879 [2024-07-24 19:15:48.409633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:42.879 [2024-07-24 19:15:48.409667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0d50 with addr=10.0.0.2, port=4420
00:23:42.879 [2024-07-24 19:15:48.409688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a0d50 is same with the state(5) to be set
00:23:42.879 [2024-07-24 19:15:48.409708] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:23:42.879 [2024-07-24 19:15:48.409727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:23:42.879 [2024-07-24 19:15:48.409747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:23:42.879 [2024-07-24 19:15:48.409840] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:42.879 [2024-07-24 19:15:48.409894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173abd0 (9): Bad file descriptor
00:23:42.879 [2024-07-24 19:15:48.409960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a0d50 (9): Bad file descriptor
00:23:42.879 [2024-07-24 19:15:48.409994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a0f80 (9): Bad file descriptor
00:23:42.879 [2024-07-24 19:15:48.410532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:42.879 [2024-07-24 19:15:48.410771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:42.879 [2024-07-24 19:15:48.410808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8200 with addr=10.0.0.2, port=4420
00:23:42.879 [2024-07-24 19:15:48.410830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c8200 is same with the state(5) to be set
00:23:42.879 [2024-07-24 19:15:48.411009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:42.879 [2024-07-24 19:15:48.411043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173a270 with addr=10.0.0.2, port=4420
00:23:42.879 [2024-07-24 19:15:48.411064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173a270 is same with the state(5) to be set
00:23:42.879 [2024-07-24 19:15:48.411528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.879 [2024-07-24 19:15:48.411559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION completion pair repeats for cid:1-63 (lba 24704-32640, stepping by 128) ...]
00:23:42.881 [2024-07-24 19:15:48.414229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1790670 is same with the state(5) to be set
00:23:42.881 [2024-07-24 19:15:48.415964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.881 [2024-07-24 19:15:48.415998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.881 [2024-07-24 19:15:48.416033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.881 [2024-07-24 19:15:48.416056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.881 [2024-07-24 19:15:48.416079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:42.881 [2024-07-24 19:15:48.416098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:42.881 [2024-07-24 19:15:48.416120] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.881 [2024-07-24 19:15:48.416139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.881 [2024-07-24 19:15:48.416160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.881 [2024-07-24 19:15:48.416179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.881 [2024-07-24 19:15:48.416200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.881 [2024-07-24 19:15:48.416220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.881 [2024-07-24 19:15:48.416243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.881 [2024-07-24 19:15:48.416262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.881 [2024-07-24 19:15:48.416283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.881 [2024-07-24 19:15:48.416302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.881 [2024-07-24 19:15:48.416323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.881 [2024-07-24 19:15:48.416342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.881 [2024-07-24 19:15:48.416364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.881 [2024-07-24 19:15:48.416383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.881 [2024-07-24 19:15:48.416403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.881 [2024-07-24 19:15:48.416423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.416456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.416477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.416498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.416518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.416538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.416563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.416585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.416604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.416625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.416644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.416665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.416684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.416705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.416724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.416747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.416766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.416788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.416807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.416828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.416847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.416869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.416887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.416908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.416927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.416949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.416967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.416989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:42.882 [2024-07-24 19:15:48.417792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.417973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.417992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.418013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.882 [2024-07-24 19:15:48.418032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.882 [2024-07-24 19:15:48.418054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.418073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.418094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.418118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.418140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.418159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.418180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 
19:15:48.418199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.418221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.418240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.418260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.418279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.418301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.418321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.418342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.418361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.418382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.418401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.418423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.418460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.418491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.418513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.418534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.418553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.418574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.418594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.418615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.418634] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.418659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c9ee0 is same with the state(5) to be set 00:23:42.883 [2024-07-24 19:15:48.420332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.420392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.420444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.420488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.420529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.420569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.420610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.420650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.420690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.420731] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.420771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.420811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.420857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.420898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.420938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.420978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.420997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.421018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.421036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.883 [2024-07-24 19:15:48.421058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.883 [2024-07-24 19:15:48.421086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.421973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.421992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:42.884 [2024-07-24 19:15:48.422405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.884 [2024-07-24 19:15:48.422718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.884 [2024-07-24 19:15:48.422740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.885 [2024-07-24 19:15:48.422759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.885 [2024-07-24 19:15:48.422780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.885 [2024-07-24 19:15:48.422799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.885 [2024-07-24 19:15:48.422821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.885 [2024-07-24 
19:15:48.422839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.885 [2024-07-24 19:15:48.422861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.885 [2024-07-24 19:15:48.422880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.885 [2024-07-24 19:15:48.422903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.885 [2024-07-24 19:15:48.422921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.885 [2024-07-24 19:15:48.422949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.885 [2024-07-24 19:15:48.422969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.885 [2024-07-24 19:15:48.422991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.885 [2024-07-24 19:15:48.423010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.885 [2024-07-24 19:15:48.423030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183c8a0 is same with the state(5) to be set 00:23:42.885 [2024-07-24 19:15:48.425106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:42.885 [2024-07-24 19:15:48.425150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:42.885 [2024-07-24 19:15:48.425174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:42.885 [2024-07-24 19:15:48.425201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:42.885 [2024-07-24 19:15:48.425290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c8200 (9): Bad file descriptor 00:23:42.885 [2024-07-24 19:15:48.425325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173a270 (9): Bad file descriptor 00:23:42.885 [2024-07-24 19:15:48.425349] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:42.885 [2024-07-24 19:15:48.425368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:42.885 [2024-07-24 19:15:48.425389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:42.885 [2024-07-24 19:15:48.425421] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:42.885 [2024-07-24 19:15:48.425452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:42.885 [2024-07-24 19:15:48.425471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:23:42.885 [2024-07-24 19:15:48.425566] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:42.885 [2024-07-24 19:15:48.425599] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:42.885 [2024-07-24 19:15:48.425627] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:42.885 [2024-07-24 19:15:48.425653] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:42.885 [2024-07-24 19:15:48.425780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:42.885 [2024-07-24 19:15:48.425809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:42.885 [2024-07-24 19:15:48.426063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:42.885 [2024-07-24 19:15:48.426102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a6950 with addr=10.0.0.2, port=4420
00:23:42.885 [2024-07-24 19:15:48.426126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a6950 is same with the state(5) to be set
00:23:42.885 [2024-07-24 19:15:48.426334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:42.885 [2024-07-24 19:15:48.426368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1706ec0 with addr=10.0.0.2, port=4420
00:23:42.885 [2024-07-24 19:15:48.426389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1706ec0 is same with the state(5) to be set
00:23:42.885 [2024-07-24 19:15:48.426623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:42.885 [2024-07-24 19:15:48.426660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170a360 with addr=10.0.0.2, port=4420
00:23:42.885 [2024-07-24 19:15:48.426682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170a360 is same with the state(5) to be set
00:23:42.885 [2024-07-24 19:15:48.426826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:42.885 [2024-07-24 19:15:48.426860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dc610 with addr=10.0.0.2, port=4420
00:23:42.885 [2024-07-24 19:15:48.426881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dc610 is same with the state(5) to be set
00:23:42.885 [2024-07-24 19:15:48.426901] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:42.885 [2024-07-24 19:15:48.426918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:42.885 [2024-07-24 19:15:48.426936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:42.885 [2024-07-24 19:15:48.426964] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:23:42.885 [2024-07-24 19:15:48.426984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:23:42.885 [2024-07-24 19:15:48.427004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
[... further repeated nvme_qpair.c NOTICE pairs elided: READ commands (sqid:1, cid:0-28, nsid:1, lba:16384-19968, len:128) each printed with completion status ABORTED - SQ DELETION (00/08); the captured log breaks off mid-record ...]
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.429373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.429392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.429413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.429439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.429463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.429483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.429503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.429527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.429549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.429569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.429590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.429609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.429630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.429649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.429670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.429688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.429709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.429728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.429748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.429767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:42.886 [2024-07-24 19:15:48.429788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.429807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.429828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.429847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.429868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.429887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.429908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.429928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.429949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.429968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.429989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.430008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.430034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.430054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.430075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.430095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.430116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.430135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.430155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.430174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 
[2024-07-24 19:15:48.430195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.430214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.430234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.430253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.430274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.430292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.430313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.430331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.430352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.430371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.430392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.430411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.430442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.430474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.886 [2024-07-24 19:15:48.430499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.886 [2024-07-24 19:15:48.430519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.887 [2024-07-24 19:15:48.430540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.887 [2024-07-24 19:15:48.430564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.887 [2024-07-24 19:15:48.430587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.887 [2024-07-24 19:15:48.430605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.887 [2024-07-24 
19:15:48.430626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.887 [2024-07-24 19:15:48.430645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.887 [2024-07-24 19:15:48.430666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.887 [2024-07-24 19:15:48.430685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.887 [2024-07-24 19:15:48.430706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.887 [2024-07-24 19:15:48.430725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.887 [2024-07-24 19:15:48.430745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.887 [2024-07-24 19:15:48.430765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.887 [2024-07-24 19:15:48.430786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:42.887 [2024-07-24 19:15:48.430805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:42.887 [2024-07-24 19:15:48.430824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184c500 is same with the state(5) to be set 00:23:42.887 [2024-07-24 19:15:48.433027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:42.887 [2024-07-24 19:15:48.433067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.887 [2024-07-24 19:15:48.433087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:42.887 task offset: 26112 on job bdev=Nvme2n1 fails
00:23:42.887
00:23:42.887 Latency(us)
00:23:42.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:42.887 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.887 Job: Nvme1n1 ended in about 1.28 seconds with error
00:23:42.887 Verification LBA range: start 0x0 length 0x400
00:23:42.887 Nvme1n1 : 1.28 99.87 6.24 49.93 0.00 422709.73 27573.67 329330.54
00:23:42.887 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.887 Job: Nvme2n1 ended in about 1.26 seconds with error
00:23:42.887 Verification LBA range: start 0x0 length 0x400
00:23:42.887 Nvme2n1 : 1.26 152.86 9.55 50.95 0.00 304371.01 6650.69 324670.20
00:23:42.887 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.887 Job: Nvme3n1 ended in about 1.29 seconds with error
00:23:42.887 Verification LBA range: start 0x0 length 0x400
00:23:42.887 Nvme3n1 : 1.29 148.31 9.27 49.44 0.00 307768.89 24563.86 347971.89
00:23:42.887 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.887 Job: Nvme4n1 ended in about 1.30 seconds with error
00:23:42.887 Verification LBA range: start 0x0 length 0x400
00:23:42.887 Nvme4n1 : 1.30 150.89 9.43 49.27 0.00 297945.70 24078.41 344865.00
00:23:42.887 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.887 Job: Nvme5n1 ended in about 1.26 seconds with error
00:23:42.887 Verification LBA range: start 0x0 length 0x400
00:23:42.887 Nvme5n1 : 1.26 151.89 9.49 50.63 0.00 287649.00 10728.49 312242.63
00:23:42.887 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.887 Job: Nvme6n1 ended in about 1.30 seconds with error
00:23:42.887 Verification LBA range: start 0x0 length 0x400
00:23:42.887 Nvme6n1 : 1.30 98.21 6.14 49.10 0.00 388509.14 25437.68 360399.45
00:23:42.887 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.887 Job: Nvme7n1 ended in about 1.28 seconds with error
00:23:42.887 Verification LBA range: start 0x0 length 0x400
00:23:42.887 Nvme7n1 : 1.28 104.17 6.51 50.13 0.00 361932.45 22524.97 360399.45
00:23:42.887 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.887 Job: Nvme8n1 ended in about 1.27 seconds with error
00:23:42.887 Verification LBA range: start 0x0 length 0x400
00:23:42.887 Nvme8n1 : 1.27 151.22 9.45 50.41 0.00 270460.68 8155.59 352632.23
00:23:42.887 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.887 Job: Nvme9n1 ended in about 1.31 seconds with error
00:23:42.887 Verification LBA range: start 0x0 length 0x400
00:23:42.887 Nvme9n1 : 1.31 97.63 6.10 48.81 0.00 366543.96 28932.93 383701.14
00:23:42.887 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:42.887 Job: Nvme10n1 ended in about 1.29 seconds with error
00:23:42.887 Verification LBA range: start 0x0 length 0x400
00:23:42.887 Nvme10n1 : 1.29 99.51 6.22 49.75 0.00 350411.28 34952.53 354185.67
00:23:42.887 ===================================================================================================================
00:23:42.887 Total : 1254.55 78.41 498.43 0.00 329817.65 6650.69 383701.14
00:23:42.887 [2024-07-24 19:15:48.468512] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:42.887 [2024-07-24 19:15:48.468608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:23:42.887 [2024-07-24 19:15:48.468712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a6950 (9): Bad file descriptor 00:23:42.887 [2024-07-24 19:15:48.468751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1706ec0 (9): Bad file descriptor 00:23:42.887 [2024-07-24 19:15:48.468778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170a360 (9): Bad file descriptor 00:23:42.887 [2024-07-24 19:15:48.468803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dc610 (9): Bad file descriptor 00:23:42.887 [2024-07-24 19:15:48.469260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.887 [2024-07-24 19:15:48.469307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x170a740 with addr=10.0.0.2, port=4420 00:23:42.887 [2024-07-24 19:15:48.469333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170a740 is same with the state(5) to be set 00:23:42.887 [2024-07-24 19:15:48.469542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.887 [2024-07-24 19:15:48.469580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173abd0 with addr=10.0.0.2, port=4420 00:23:42.887 [2024-07-24 19:15:48.469602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173abd0 is same with the state(5) to be set 00:23:42.887 [2024-07-24 19:15:48.469623] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:42.887 [2024-07-24 19:15:48.469641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:42.887 [2024-07-24 19:15:48.469661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:42.887 [2024-07-24 19:15:48.469689] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:42.887 [2024-07-24 19:15:48.469731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:42.887 [2024-07-24 19:15:48.469750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:42.887 [2024-07-24 19:15:48.469773] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:42.887 [2024-07-24 19:15:48.469791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:42.887 [2024-07-24 19:15:48.469809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:42.887 [2024-07-24 19:15:48.469830] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:42.887 [2024-07-24 19:15:48.469848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:42.887 [2024-07-24 19:15:48.469866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:42.887 [2024-07-24 19:15:48.469936] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:42.887 [2024-07-24 19:15:48.469966] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:42.887 [2024-07-24 19:15:48.469991] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:42.887 [2024-07-24 19:15:48.470014] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:42.887 [2024-07-24 19:15:48.470527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.887 [2024-07-24 19:15:48.470562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.887 [2024-07-24 19:15:48.470579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.887 [2024-07-24 19:15:48.470594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.887 [2024-07-24 19:15:48.470629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170a740 (9): Bad file descriptor 00:23:42.887 [2024-07-24 19:15:48.470658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173abd0 (9): Bad file descriptor 00:23:42.887 [2024-07-24 19:15:48.471025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:42.887 [2024-07-24 19:15:48.471065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:42.887 [2024-07-24 19:15:48.471089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:42.887 [2024-07-24 19:15:48.471141] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:42.887 [2024-07-24 19:15:48.471164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:42.887 [2024-07-24 19:15:48.471183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:42.887 [2024-07-24 19:15:48.471206] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:42.888 [2024-07-24 19:15:48.471224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:42.888 [2024-07-24 19:15:48.471242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:42.888 [2024-07-24 19:15:48.471294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:42.888 [2024-07-24 19:15:48.471335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.888 [2024-07-24 19:15:48.471357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:42.888 [2024-07-24 19:15:48.471560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.888 [2024-07-24 19:15:48.471606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0d50 with addr=10.0.0.2, port=4420 00:23:42.888 [2024-07-24 19:15:48.471631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a0d50 is same with the state(5) to be set 00:23:42.888 [2024-07-24 19:15:48.471851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.888 [2024-07-24 19:15:48.471887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a0f80 with addr=10.0.0.2, port=4420 00:23:42.888 [2024-07-24 19:15:48.471909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a0f80 is same with the state(5) to be set 00:23:42.888 [2024-07-24 19:15:48.472086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.888 [2024-07-24 19:15:48.472121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x173a270 with addr=10.0.0.2, port=4420 00:23:42.888 [2024-07-24 19:15:48.472142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173a270 is same with the state(5) to be set 00:23:42.888 [2024-07-24 19:15:48.472368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:42.888 [2024-07-24 19:15:48.472405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8200 with addr=10.0.0.2, port=4420 00:23:42.888 [2024-07-24 19:15:48.472435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c8200 is same with the state(5) to be set 00:23:42.888 [2024-07-24 19:15:48.472463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a0d50 (9): Bad file descriptor 00:23:42.888 [2024-07-24 19:15:48.472490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a0f80 (9): Bad file descriptor 00:23:42.888 [2024-07-24 19:15:48.472515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x173a270 (9): Bad file descriptor 00:23:42.888 [2024-07-24 19:15:48.472574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c8200 (9): Bad file descriptor 00:23:42.888 [2024-07-24 19:15:48.472603] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:42.888 [2024-07-24 19:15:48.472622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:42.888 [2024-07-24 19:15:48.472640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:42.888 [2024-07-24 19:15:48.472662] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:42.888 [2024-07-24 19:15:48.472681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:42.888 [2024-07-24 19:15:48.472699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:23:42.888 [2024-07-24 19:15:48.472719] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:42.888 [2024-07-24 19:15:48.472736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:42.888 [2024-07-24 19:15:48.472754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:42.888 [2024-07-24 19:15:48.472805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.888 [2024-07-24 19:15:48.472829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.888 [2024-07-24 19:15:48.472845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:42.888 [2024-07-24 19:15:48.472861] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:42.888 [2024-07-24 19:15:48.472878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:42.888 [2024-07-24 19:15:48.472895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:42.888 [2024-07-24 19:15:48.472953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:43.455 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:43.455 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:44.393 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1709351 00:23:44.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1709351) - No such process 00:23:44.393 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:44.393 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:44.393 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:44.393 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:44.393 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:44.393 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:44.393 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:44.393 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:44.393 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:44.393 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:44.393 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:44.393 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:23:44.652 rmmod nvme_tcp 00:23:44.652 rmmod nvme_fabrics 00:23:44.652 rmmod nvme_keyring 00:23:44.653 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:44.653 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:44.653 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:44.653 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:44.653 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:44.653 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:44.653 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:44.653 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:44.653 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:44.653 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.653 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.653 19:15:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.555 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:46.555 00:23:46.555 real 0m8.196s 00:23:46.555 user 0m20.888s 00:23:46.555 sys 0m1.806s 00:23:46.555 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:46.555 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.555 ************************************ 00:23:46.555 END TEST nvmf_shutdown_tc3 00:23:46.555 ************************************ 00:23:46.555 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:46.555 00:23:46.555 real 0m30.542s 00:23:46.555 user 1m26.069s 00:23:46.555 sys 0m7.678s 00:23:46.555 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:46.555 19:15:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:46.555 ************************************ 00:23:46.556 END TEST nvmf_shutdown 00:23:46.556 ************************************ 00:23:46.556 19:15:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:23:46.556 00:23:46.556 real 12m47.756s 00:23:46.556 user 30m35.609s 00:23:46.556 sys 3m0.564s 00:23:46.816 19:15:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:46.816 19:15:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:46.816 ************************************ 00:23:46.816 END TEST nvmf_target_extra 00:23:46.816 ************************************ 00:23:46.816 19:15:52 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:46.816 19:15:52 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:46.816 19:15:52 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:46.816 19:15:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:46.816 ************************************ 00:23:46.816 START TEST nvmf_host 00:23:46.816 ************************************ 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:46.816 * Looking for test storage... 00:23:46.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.816 ************************************ 00:23:46.816 START TEST nvmf_multicontroller 00:23:46.816 ************************************ 00:23:46.816 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh 
--transport=tcp 00:23:47.075 * Looking for test storage... 00:23:47.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.075 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.075 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:47.075 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.075 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.075 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.075 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:47.076 19:15:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:47.076 19:15:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:50.366 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:50.366 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.366 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:50.367 Found net devices under 0000:84:00.0: cvl_0_0 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:50.367 Found net devices under 0000:84:00.1: cvl_0_1 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.367 19:15:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:50.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:23:50.367 00:23:50.367 --- 10.0.0.2 ping statistics --- 00:23:50.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.367 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:50.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:50.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:23:50.367 00:23:50.367 --- 10.0.0.1 ping statistics --- 00:23:50.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.367 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1711930 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1711930 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1711930 ']' 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:50.367 19:15:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.367 [2024-07-24 19:15:55.585567] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
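At this point the target side is fully plumbed: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace on cores 1-3 (-m 0xE) with every trace group enabled (-e 0xFFFF), records the PID, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A minimal stand-alone sketch of that launch-and-wait step, assuming the SPDK checkout path from this run and approximating waitforlisten with an rpc.py polling loop (not the actual helper):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # checkout path taken from this run
NS="ip netns exec cvl_0_0_ns_spdk"

# Start the target inside the namespace: shm id 0, all trace groups, cores 1-3.
$NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Stand-in for waitforlisten: poll the default RPC socket until it responds.
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done
echo "nvmf_tgt is up (pid $nvmfpid)"

The unix-domain RPC socket lives on the shared filesystem, so rpc.py can be invoked from the default namespace even though the target's network stack is confined to cvl_0_0_ns_spdk.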
00:23:50.367 [2024-07-24 19:15:55.585664] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.367 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.367 [2024-07-24 19:15:55.681569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:50.367 [2024-07-24 19:15:55.823145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.367 [2024-07-24 19:15:55.823210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.367 [2024-07-24 19:15:55.823230] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.367 [2024-07-24 19:15:55.823246] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.367 [2024-07-24 19:15:55.823262] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.367 [2024-07-24 19:15:55.823371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.367 [2024-07-24 19:15:55.823423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:50.367 [2024-07-24 19:15:55.823423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.302 [2024-07-24 19:15:56.936096] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.302 Malloc0 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.302 
19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.302 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:51.303 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.303 19:15:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.562 [2024-07-24 19:15:57.008139] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.562 [2024-07-24 19:15:57.016007] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.562 Malloc1 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.562 19:15:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1712207 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1712207 /var/tmp/bdevperf.sock 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1712207 ']' 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
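bdevperf has just been launched with -z (wait-for-RPC) against /var/tmp/bdevperf.sock, so it idles until it is driven over that socket; the harness does this with rpc_cmd -s and, once all paths are in place, with examples/bdev/bdevperf/bdevperf.py perform_tests. The same two steps can be reproduced by hand; a sketch using the addresses and ports of this run:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock

# Attach cnode1 as controller NVMe0 (bdev NVMe0n1); -i/-c pin the host-side
# source address and port 60000 that the duplicate-name checks below depend on.
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# A -z bdevperf only starts its workload (-q 128 -o 4096 -w write -t 1) when told to.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests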
00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.562 19:15:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.496 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.496 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:52.496 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:52.496 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.496 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.755 NVMe0n1 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.755 1 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.755 request: 00:23:52.755 { 00:23:52.755 "name": "NVMe0", 00:23:52.755 "trtype": "tcp", 00:23:52.755 "traddr": "10.0.0.2", 00:23:52.755 "adrfam": "ipv4", 00:23:52.755 
"trsvcid": "4420", 00:23:52.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.755 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:52.755 "hostaddr": "10.0.0.2", 00:23:52.755 "hostsvcid": "60000", 00:23:52.755 "prchk_reftag": false, 00:23:52.755 "prchk_guard": false, 00:23:52.755 "hdgst": false, 00:23:52.755 "ddgst": false, 00:23:52.755 "method": "bdev_nvme_attach_controller", 00:23:52.755 "req_id": 1 00:23:52.755 } 00:23:52.755 Got JSON-RPC error response 00:23:52.755 response: 00:23:52.755 { 00:23:52.755 "code": -114, 00:23:52.755 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:52.755 } 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.755 request: 00:23:52.755 { 00:23:52.755 "name": "NVMe0", 00:23:52.755 "trtype": "tcp", 00:23:52.755 "traddr": "10.0.0.2", 00:23:52.755 "adrfam": "ipv4", 00:23:52.755 "trsvcid": "4420", 00:23:52.755 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:52.755 "hostaddr": "10.0.0.2", 00:23:52.755 "hostsvcid": "60000", 00:23:52.755 "prchk_reftag": false, 00:23:52.755 "prchk_guard": false, 00:23:52.755 "hdgst": false, 00:23:52.755 "ddgst": false, 00:23:52.755 "method": "bdev_nvme_attach_controller", 00:23:52.755 "req_id": 1 00:23:52.755 } 00:23:52.755 Got JSON-RPC error response 00:23:52.755 response: 00:23:52.755 { 00:23:52.755 "code": -114, 00:23:52.755 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:23:52.755 } 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:52.755 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.756 request: 00:23:52.756 { 00:23:52.756 "name": "NVMe0", 00:23:52.756 "trtype": "tcp", 00:23:52.756 "traddr": "10.0.0.2", 00:23:52.756 "adrfam": "ipv4", 00:23:52.756 "trsvcid": "4420", 00:23:52.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.756 "hostaddr": "10.0.0.2", 00:23:52.756 "hostsvcid": "60000", 00:23:52.756 "prchk_reftag": false, 00:23:52.756 "prchk_guard": false, 00:23:52.756 "hdgst": false, 00:23:52.756 "ddgst": false, 00:23:52.756 "multipath": "disable", 00:23:52.756 "method": "bdev_nvme_attach_controller", 00:23:52.756 "req_id": 1 00:23:52.756 } 00:23:52.756 Got JSON-RPC error response 00:23:52.756 response: 00:23:52.756 { 00:23:52.756 "code": -114, 00:23:52.756 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:52.756 } 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.756 request: 00:23:52.756 { 00:23:52.756 "name": "NVMe0", 00:23:52.756 "trtype": "tcp", 00:23:52.756 "traddr": "10.0.0.2", 00:23:52.756 "adrfam": "ipv4", 00:23:52.756 "trsvcid": "4420", 00:23:52.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.756 "hostaddr": "10.0.0.2", 00:23:52.756 "hostsvcid": "60000", 00:23:52.756 "prchk_reftag": false, 00:23:52.756 "prchk_guard": false, 00:23:52.756 "hdgst": false, 00:23:52.756 "ddgst": false, 00:23:52.756 "multipath": "failover", 00:23:52.756 "method": "bdev_nvme_attach_controller", 00:23:52.756 "req_id": 1 00:23:52.756 } 00:23:52.756 Got JSON-RPC error response 00:23:52.756 response: 00:23:52.756 { 00:23:52.756 "code": -114, 00:23:52.756 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:52.756 } 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.756 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:53.013 00:23:53.013 19:15:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.013 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:53.013 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.013 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:53.013 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.013 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:53.013 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.013 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:53.271 00:23:53.271 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.271 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:53.271 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:53.271 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.271 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:53.271 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.271 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:53.271 19:15:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:54.206 0 00:23:54.206 19:15:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:54.206 19:15:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.206 19:15:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.206 19:15:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.206 19:15:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1712207 00:23:54.206 19:15:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1712207 ']' 00:23:54.206 19:15:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1712207 00:23:54.206 19:15:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:54.206 19:15:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.206 19:15:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1712207 00:23:54.464 19:15:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
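The attach sequence above is the core of the multicontroller test: reusing the controller name NVMe0 with a different host NQN, a different subsystem NQN, or multipath=disable is rejected with JSON-RPC error -114, while re-attaching the same subsystem through the second listener (port 4421) succeeds and merely adds a path, which is why bdev_nvme_get_controllers only reports a second controller once NVMe1 is attached. A condensed sketch of the two contrasting cases, against the same bdevperf socket:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# Rejected (-114): NVMe0 is already bound to cnode1, so the same controller
# name may not be pointed at a different subsystem NQN.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 \
    || echo "refused: NVMe0 already exists with the specified network path"

# Accepted: the same subsystem via listener 4421 becomes an extra path on the
# existing NVMe0 controller rather than a new controller entry.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

$RPC bdev_nvme_get_controllers    # still a single controller named NVMe0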
00:23:54.464 19:15:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:54.464 19:15:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1712207' 00:23:54.464 killing process with pid 1712207 00:23:54.464 19:15:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1712207 00:23:54.465 19:15:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1712207 00:23:54.723 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.723 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.723 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.723 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.723 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:54.723 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.723 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.723 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.723 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:54.723 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:54.723 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:54.723 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:54.723 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:54.723 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:54.723 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:54.724 [2024-07-24 19:15:57.120998] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:23:54.724 [2024-07-24 19:15:57.121088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1712207 ] 00:23:54.724 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.724 [2024-07-24 19:15:57.192807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.724 [2024-07-24 19:15:57.331762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.724 [2024-07-24 19:15:58.712330] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 38d24247-20f5-417b-8b27-b739bc94efec already exists 00:23:54.724 [2024-07-24 19:15:58.712385] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:38d24247-20f5-417b-8b27-b739bc94efec alias for bdev NVMe1n1 00:23:54.724 [2024-07-24 19:15:58.712406] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:54.724 Running I/O for 1 seconds... 00:23:54.724 00:23:54.724 Latency(us) 00:23:54.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.724 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:54.724 NVMe0n1 : 1.00 14032.46 54.81 0.00 0.00 9104.70 4126.34 17087.91 00:23:54.724 =================================================================================================================== 00:23:54.724 Total : 14032.46 54.81 0.00 0.00 9104.70 4126.34 17087.91 00:23:54.724 Received shutdown signal, test time was about 1.000000 seconds 00:23:54.724 00:23:54.724 Latency(us) 00:23:54.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.724 =================================================================================================================== 00:23:54.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.724 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.724 rmmod nvme_tcp 00:23:54.724 rmmod nvme_fabrics 00:23:54.724 rmmod nvme_keyring 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1711930 ']' 00:23:54.724 19:16:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1711930 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1711930 ']' 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1711930 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1711930 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1711930' 00:23:54.724 killing process with pid 1711930 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1711930 00:23:54.724 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1711930 00:23:55.291 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:55.291 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:55.291 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:55.292 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:55.292 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:55.292 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.292 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.292 19:16:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.195 19:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:57.195 00:23:57.195 real 0m10.304s 00:23:57.195 user 0m19.279s 00:23:57.195 sys 0m3.194s 00:23:57.195 19:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:57.195 19:16:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.195 ************************************ 00:23:57.195 END TEST nvmf_multicontroller 00:23:57.195 ************************************ 00:23:57.195 19:16:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:57.195 19:16:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:57.195 19:16:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:57.195 19:16:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.195 ************************************ 00:23:57.195 START TEST nvmf_aer 00:23:57.195 ************************************ 00:23:57.195 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:57.454 * Looking for test storage... 00:23:57.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:57.454 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.454 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:57.454 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.454 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.454 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.454 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:57.455 19:16:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:59.988 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:59.988 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:59.988 Found net devices under 0000:84:00.0: cvl_0_0 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.988 19:16:05 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:59.988 Found net devices under 0000:84:00.1: cvl_0_1 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:59.988 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:24:00.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:24:00.248 00:24:00.248 --- 10.0.0.2 ping statistics --- 00:24:00.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.248 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:24:00.248 00:24:00.248 --- 10.0.0.1 ping statistics --- 00:24:00.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.248 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1714566 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1714566 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1714566 ']' 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:00.248 19:16:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:00.248 [2024-07-24 19:16:05.808109] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
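The two E810 ports found above are cabled back to back, and nvmf_tcp_init splits them across a network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), so the two pings just traced cross the physical NIC rather than loopback. A minimal standalone sketch of that topology, assuming the same interface and namespace names as the trace:

    #!/usr/bin/env bash
    # Rebuild the nvmf_tcp_init topology traced above (a sketch, not the harness).
    set -e
    ip netns add cvl_0_0_ns_spdk                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator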
00:24:00.248 [2024-07-24 19:16:05.808213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.248 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.248 [2024-07-24 19:16:05.916876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.507 [2024-07-24 19:16:06.118158] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.507 [2024-07-24 19:16:06.118266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.507 [2024-07-24 19:16:06.118302] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.507 [2024-07-24 19:16:06.118331] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.507 [2024-07-24 19:16:06.118357] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.507 [2024-07-24 19:16:06.118508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.507 [2024-07-24 19:16:06.118569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.507 [2024-07-24 19:16:06.118629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.507 [2024-07-24 19:16:06.118633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:00.766 [2024-07-24 19:16:06.305661] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:00.766 Malloc0 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:00.766 19:16:06 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:00.766 [2024-07-24 19:16:06.363490] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:00.766 [ 00:24:00.766 { 00:24:00.766 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:00.766 "subtype": "Discovery", 00:24:00.766 "listen_addresses": [], 00:24:00.766 "allow_any_host": true, 00:24:00.766 "hosts": [] 00:24:00.766 }, 00:24:00.766 { 00:24:00.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.766 "subtype": "NVMe", 00:24:00.766 "listen_addresses": [ 00:24:00.766 { 00:24:00.766 "trtype": "TCP", 00:24:00.766 "adrfam": "IPv4", 00:24:00.766 "traddr": "10.0.0.2", 00:24:00.766 "trsvcid": "4420" 00:24:00.766 } 00:24:00.766 ], 00:24:00.766 "allow_any_host": true, 00:24:00.766 "hosts": [], 00:24:00.766 "serial_number": "SPDK00000000000001", 00:24:00.766 "model_number": "SPDK bdev Controller", 00:24:00.766 "max_namespaces": 2, 00:24:00.766 "min_cntlid": 1, 00:24:00.766 "max_cntlid": 65519, 00:24:00.766 "namespaces": [ 00:24:00.766 { 00:24:00.766 "nsid": 1, 00:24:00.766 "bdev_name": "Malloc0", 00:24:00.766 "name": "Malloc0", 00:24:00.766 "nguid": "0E909816A06B4AFFBC60243E80547370", 00:24:00.766 "uuid": "0e909816-a06b-4aff-bc60-243e80547370" 00:24:00.766 } 00:24:00.766 ] 00:24:00.766 } 00:24:00.766 ] 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1714707 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:00.766 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:00.766 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.024 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:01.024 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:01.024 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:01.024 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:01.024 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:01.024 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:24:01.024 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:24:01.024 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:01.024 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:01.024 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:01.024 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:01.024 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:01.024 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.024 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.283 Malloc1 00:24:01.283 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.283 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:01.283 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.283 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.283 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.283 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:01.283 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.283 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.283 [ 00:24:01.283 { 00:24:01.283 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:01.283 "subtype": "Discovery", 00:24:01.283 "listen_addresses": [], 00:24:01.283 "allow_any_host": true, 00:24:01.283 "hosts": [] 00:24:01.283 }, 00:24:01.283 { 00:24:01.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.283 "subtype": "NVMe", 00:24:01.283 "listen_addresses": [ 00:24:01.283 { 00:24:01.283 "trtype": "TCP", 00:24:01.283 "adrfam": "IPv4", 00:24:01.283 "traddr": "10.0.0.2", 00:24:01.283 "trsvcid": "4420" 00:24:01.283 } 00:24:01.283 ], 00:24:01.283 "allow_any_host": true, 00:24:01.283 "hosts": [], 00:24:01.283 "serial_number": "SPDK00000000000001", 00:24:01.283 "model_number": "SPDK bdev Controller", 00:24:01.283 "max_namespaces": 2, 00:24:01.283 "min_cntlid": 1, 00:24:01.283 
"max_cntlid": 65519, 00:24:01.283 "namespaces": [ 00:24:01.283 { 00:24:01.283 "nsid": 1, 00:24:01.283 "bdev_name": "Malloc0", 00:24:01.283 "name": "Malloc0", 00:24:01.283 "nguid": "0E909816A06B4AFFBC60243E80547370", 00:24:01.283 "uuid": "0e909816-a06b-4aff-bc60-243e80547370" 00:24:01.283 }, 00:24:01.283 { 00:24:01.283 "nsid": 2, 00:24:01.283 "bdev_name": "Malloc1", 00:24:01.283 "name": "Malloc1", 00:24:01.283 "nguid": "A2C4519E75CC44D2ACBBC841B529D465", 00:24:01.283 "uuid": "a2c4519e-75cc-44d2-acbb-c841b529d465" 00:24:01.283 } 00:24:01.283 ] 00:24:01.283 } 00:24:01.283 ] 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1714707 00:24:01.284 Asynchronous Event Request test 00:24:01.284 Attaching to 10.0.0.2 00:24:01.284 Attached to 10.0.0.2 00:24:01.284 Registering asynchronous event callbacks... 00:24:01.284 Starting namespace attribute notice tests for all controllers... 00:24:01.284 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:01.284 aer_cb - Changed Namespace 00:24:01.284 Cleaning up... 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:01.284 rmmod nvme_tcp 00:24:01.284 rmmod nvme_fabrics 00:24:01.284 rmmod nvme_keyring 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:01.284 19:16:06 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1714566 ']' 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1714566 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1714566 ']' 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1714566 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:01.284 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1714566 00:24:01.541 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:01.541 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:01.541 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1714566' 00:24:01.541 killing process with pid 1714566 00:24:01.542 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1714566 00:24:01.542 19:16:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1714566 00:24:01.799 19:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:01.799 19:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:01.799 19:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:01.799 19:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:01.799 19:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:01.799 19:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.799 19:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.799 19:16:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:04.329 00:24:04.329 real 0m6.604s 00:24:04.329 user 0m5.658s 00:24:04.329 sys 0m2.633s 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.329 ************************************ 00:24:04.329 END TEST nvmf_aer 00:24:04.329 ************************************ 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.329 ************************************ 00:24:04.329 START TEST nvmf_async_init 00:24:04.329 ************************************ 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:04.329 * Looking for test storage... 
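Before nvmf_async_init proceeds, note what the aer run above actually verified: the test/nvme/aer tool connected to cnode1, armed its AER callbacks, and touched /tmp/aer_touch_file; adding Malloc1 as nsid 2 then made the target post a Changed Namespace notice (log page 4, aen_event_type 0x02) that the callback acknowledged before cleanup. A condensed, hand-driven replay against an already-running target, issuing the same RPC methods seen above via scripts/rpc.py (the $rpc and $aer paths are this workspace's):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    aer=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer
    $rpc bdev_malloc_create 64 512 --name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &              # arms AER, then touches the file
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done   # waitforfile handshake
    $rpc bdev_malloc_create 64 4096 --name Malloc1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # fires the AEN
    wait                                           # aer exits after the notice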
00:24:04.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.329 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:04.330 19:16:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=9cec0f4b5fc8422f9fe6d8c6b1877383 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:04.330 19:16:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:06.863 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:06.863 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
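This second pass of gather_supported_nvmf_pci_devs classifies the same two ports again: vendor 0x8086 device 0x159b lands in the e810 list (driver ice), the x722 (0x37d2) and Mellanox (0x15b3) buckets stay empty, and each kept device's net interface is read out of sysfs. A loose lspci-based equivalent of that filter (a sketch; the harness actually uses its own pci_bus_cache):

    # Intel IDs from the trace: E810 = 0x1592/0x159b, X722 = 0x37d2.
    for dev in 1592 159b 37d2; do
        lspci -Dn -d 8086:"$dev"                  # -D full domain, -n numeric IDs
    done
    # Net devices backing a kept port, read the same way the harness does:
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        ls "/sys/bus/pci/devices/$pci/net/"       # e.g. cvl_0_0, cvl_0_1
    done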
00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:06.863 Found net devices under 0000:84:00.0: cvl_0_0 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:06.863 Found net devices under 0000:84:00.1: cvl_0_1 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.863 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.864 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:07.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:24:07.137 00:24:07.137 --- 10.0.0.2 ping statistics --- 00:24:07.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.137 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:07.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:24:07.137 00:24:07.137 --- 10.0.0.1 ping statistics --- 00:24:07.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.137 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1716793 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1716793 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1716793 ']' 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:07.137 19:16:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.137 [2024-07-24 19:16:12.688689] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
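nvmfappstart -m 0x1 launches nvmf_tgt inside the target namespace (pid 1716793 here) and waitforlisten then gates the test until the app's RPC socket is serviceable. A bare-bones version of that gate, assuming the default /var/tmp/spdk.sock (the real helper also probes over the socket; this sketch only polls the socket node and the pid):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1 &                   # same invocation as the trace
    nvmfpid=$!
    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1  # target died during startup
        [ -S /var/tmp/spdk.sock ] && break        # RPC listener is up
        sleep 0.1
    done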
00:24:07.137 [2024-07-24 19:16:12.688855] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.137 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.137 [2024-07-24 19:16:12.816946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.416 [2024-07-24 19:16:13.006960] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.416 [2024-07-24 19:16:13.007065] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.416 [2024-07-24 19:16:13.007103] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.416 [2024-07-24 19:16:13.007134] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.416 [2024-07-24 19:16:13.007161] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:07.416 [2024-07-24 19:16:13.007226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.675 [2024-07-24 19:16:13.232347] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.675 null0 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:07.675 19:16:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9cec0f4b5fc8422f9fe6d8c6b1877383 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.675 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.676 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:07.676 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.676 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.676 [2024-07-24 19:16:13.277878] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.676 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.676 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:07.676 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.676 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.933 nvme0n1 00:24:07.933 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.933 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:07.933 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.933 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.933 [ 00:24:07.933 { 00:24:07.933 "name": "nvme0n1", 00:24:07.933 "aliases": [ 00:24:07.933 "9cec0f4b-5fc8-422f-9fe6-d8c6b1877383" 00:24:07.933 ], 00:24:07.933 "product_name": "NVMe disk", 00:24:07.933 "block_size": 512, 00:24:07.933 "num_blocks": 2097152, 00:24:07.933 "uuid": "9cec0f4b-5fc8-422f-9fe6-d8c6b1877383", 00:24:07.933 "assigned_rate_limits": { 00:24:07.933 "rw_ios_per_sec": 0, 00:24:07.933 "rw_mbytes_per_sec": 0, 00:24:07.933 "r_mbytes_per_sec": 0, 00:24:07.933 "w_mbytes_per_sec": 0 00:24:07.933 }, 00:24:07.933 "claimed": false, 00:24:07.933 "zoned": false, 00:24:07.933 "supported_io_types": { 00:24:07.933 "read": true, 00:24:07.933 "write": true, 00:24:07.933 "unmap": false, 00:24:07.933 "flush": true, 00:24:07.933 "reset": true, 00:24:07.933 "nvme_admin": true, 00:24:07.933 "nvme_io": true, 00:24:07.933 "nvme_io_md": false, 00:24:07.933 "write_zeroes": true, 00:24:07.933 "zcopy": false, 00:24:07.933 "get_zone_info": false, 00:24:07.933 "zone_management": false, 00:24:07.933 "zone_append": false, 00:24:07.933 "compare": true, 00:24:07.933 "compare_and_write": true, 00:24:07.933 "abort": true, 00:24:07.933 "seek_hole": false, 00:24:07.933 "seek_data": false, 00:24:07.933 "copy": true, 00:24:07.933 "nvme_iov_md": 
false 00:24:07.933 }, 00:24:07.933 "memory_domains": [ 00:24:07.933 { 00:24:07.933 "dma_device_id": "system", 00:24:07.933 "dma_device_type": 1 00:24:07.933 } 00:24:07.933 ], 00:24:07.933 "driver_specific": { 00:24:07.933 "nvme": [ 00:24:07.933 { 00:24:07.933 "trid": { 00:24:07.933 "trtype": "TCP", 00:24:07.933 "adrfam": "IPv4", 00:24:07.933 "traddr": "10.0.0.2", 00:24:07.933 "trsvcid": "4420", 00:24:07.933 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:07.933 }, 00:24:07.933 "ctrlr_data": { 00:24:07.933 "cntlid": 1, 00:24:07.933 "vendor_id": "0x8086", 00:24:07.933 "model_number": "SPDK bdev Controller", 00:24:07.933 "serial_number": "00000000000000000000", 00:24:07.933 "firmware_revision": "24.09", 00:24:07.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:07.933 "oacs": { 00:24:07.933 "security": 0, 00:24:07.933 "format": 0, 00:24:07.933 "firmware": 0, 00:24:07.933 "ns_manage": 0 00:24:07.933 }, 00:24:07.933 "multi_ctrlr": true, 00:24:07.933 "ana_reporting": false 00:24:07.933 }, 00:24:07.933 "vs": { 00:24:07.933 "nvme_version": "1.3" 00:24:07.933 }, 00:24:07.933 "ns_data": { 00:24:07.933 "id": 1, 00:24:07.933 "can_share": true 00:24:07.933 } 00:24:07.933 } 00:24:07.933 ], 00:24:07.933 "mp_policy": "active_passive" 00:24:07.933 } 00:24:07.933 } 00:24:07.933 ] 00:24:07.933 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.933 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:07.933 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.933 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:07.933 [2024-07-24 19:16:13.548931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:07.933 [2024-07-24 19:16:13.549136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ee700 (9): Bad file descriptor 00:24:08.191 [2024-07-24 19:16:13.681811] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
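The reset above exercises the host-side failover path of async_init: bdev_nvme_reset_controller tears down the admin queue (the "Bad file descriptor" flush error is part of the disconnect, not a failure), reconnects to the same listener, and the follow-up bdev_get_bdevs confirms the namespace UUID is unchanged while ctrlr_data.cntlid advances from 1 to 2 because the target handed out a fresh controller. A minimal sketch of checking the same thing by hand, assuming a running target with the cnode0 subsystem from this test, the usual scripts/rpc.py location, and jq available:

    # Reset the attached controller; the bdev stays registered across the reconnect.
    scripts/rpc.py bdev_nvme_reset_controller nvme0
    # The namespace UUID should be stable, but the controller ID should have changed.
    scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq '.[0] | {uuid, cntlid: .driver_specific.nvme[0].ctrlr_data.cntlid}'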
00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.191 [ 00:24:08.191 { 00:24:08.191 "name": "nvme0n1", 00:24:08.191 "aliases": [ 00:24:08.191 "9cec0f4b-5fc8-422f-9fe6-d8c6b1877383" 00:24:08.191 ], 00:24:08.191 "product_name": "NVMe disk", 00:24:08.191 "block_size": 512, 00:24:08.191 "num_blocks": 2097152, 00:24:08.191 "uuid": "9cec0f4b-5fc8-422f-9fe6-d8c6b1877383", 00:24:08.191 "assigned_rate_limits": { 00:24:08.191 "rw_ios_per_sec": 0, 00:24:08.191 "rw_mbytes_per_sec": 0, 00:24:08.191 "r_mbytes_per_sec": 0, 00:24:08.191 "w_mbytes_per_sec": 0 00:24:08.191 }, 00:24:08.191 "claimed": false, 00:24:08.191 "zoned": false, 00:24:08.191 "supported_io_types": { 00:24:08.191 "read": true, 00:24:08.191 "write": true, 00:24:08.191 "unmap": false, 00:24:08.191 "flush": true, 00:24:08.191 "reset": true, 00:24:08.191 "nvme_admin": true, 00:24:08.191 "nvme_io": true, 00:24:08.191 "nvme_io_md": false, 00:24:08.191 "write_zeroes": true, 00:24:08.191 "zcopy": false, 00:24:08.191 "get_zone_info": false, 00:24:08.191 "zone_management": false, 00:24:08.191 "zone_append": false, 00:24:08.191 "compare": true, 00:24:08.191 "compare_and_write": true, 00:24:08.191 "abort": true, 00:24:08.191 "seek_hole": false, 00:24:08.191 "seek_data": false, 00:24:08.191 "copy": true, 00:24:08.191 "nvme_iov_md": false 00:24:08.191 }, 00:24:08.191 "memory_domains": [ 00:24:08.191 { 00:24:08.191 "dma_device_id": "system", 00:24:08.191 "dma_device_type": 1 00:24:08.191 } 00:24:08.191 ], 00:24:08.191 "driver_specific": { 00:24:08.191 "nvme": [ 00:24:08.191 { 00:24:08.191 "trid": { 00:24:08.191 "trtype": "TCP", 00:24:08.191 "adrfam": "IPv4", 00:24:08.191 "traddr": "10.0.0.2", 00:24:08.191 "trsvcid": "4420", 00:24:08.191 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:08.191 }, 00:24:08.191 "ctrlr_data": { 00:24:08.191 "cntlid": 2, 00:24:08.191 "vendor_id": "0x8086", 00:24:08.191 "model_number": "SPDK bdev Controller", 00:24:08.191 "serial_number": "00000000000000000000", 00:24:08.191 "firmware_revision": "24.09", 00:24:08.191 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:08.191 "oacs": { 00:24:08.191 "security": 0, 00:24:08.191 "format": 0, 00:24:08.191 "firmware": 0, 00:24:08.191 "ns_manage": 0 00:24:08.191 }, 00:24:08.191 "multi_ctrlr": true, 00:24:08.191 "ana_reporting": false 00:24:08.191 }, 00:24:08.191 "vs": { 00:24:08.191 "nvme_version": "1.3" 00:24:08.191 }, 00:24:08.191 "ns_data": { 00:24:08.191 "id": 1, 00:24:08.191 "can_share": true 00:24:08.191 } 00:24:08.191 } 00:24:08.191 ], 00:24:08.191 "mp_policy": "active_passive" 00:24:08.191 } 00:24:08.191 } 00:24:08.191 ] 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.191 19:16:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.GQL6KGy6E7 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.GQL6KGy6E7 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.191 [2024-07-24 19:16:13.741806] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:08.191 [2024-07-24 19:16:13.742100] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GQL6KGy6E7 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.191 [2024-07-24 19:16:13.749830] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GQL6KGy6E7 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.191 [2024-07-24 19:16:13.757906] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:08.191 [2024-07-24 19:16:13.758041] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:08.191 nvme0n1 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
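The block above is the TLS portion of the test: the subsystem stops allowing arbitrary hosts, a second listener is opened on port 4421 with --secure-channel, and the host NQN is registered with a pre-shared key in the NVMeTLSkey-1 interchange format before the initiator attaches with the same key. Condensed from the trace (key value and addresses taken from this run; the scripts/rpc.py location is assumed, and note the log's own warning that the PSK file-path form is deprecated for removal in v24.09):

    # Write the retained PSK to a mode-0600 temp file (key value as in this run).
    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    # Restrict the subsystem to named hosts and open a TLS-only listener.
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk "$key_path"
    # Attach from the host side with the matching host NQN and PSK.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"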
00:24:08.191 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.191 [ 00:24:08.191 { 00:24:08.191 "name": "nvme0n1", 00:24:08.191 "aliases": [ 00:24:08.191 "9cec0f4b-5fc8-422f-9fe6-d8c6b1877383" 00:24:08.191 ], 00:24:08.191 "product_name": "NVMe disk", 00:24:08.192 "block_size": 512, 00:24:08.192 "num_blocks": 2097152, 00:24:08.192 "uuid": "9cec0f4b-5fc8-422f-9fe6-d8c6b1877383", 00:24:08.192 "assigned_rate_limits": { 00:24:08.192 "rw_ios_per_sec": 0, 00:24:08.192 "rw_mbytes_per_sec": 0, 00:24:08.192 "r_mbytes_per_sec": 0, 00:24:08.192 "w_mbytes_per_sec": 0 00:24:08.192 }, 00:24:08.192 "claimed": false, 00:24:08.192 "zoned": false, 00:24:08.192 "supported_io_types": { 00:24:08.192 "read": true, 00:24:08.192 "write": true, 00:24:08.192 "unmap": false, 00:24:08.192 "flush": true, 00:24:08.192 "reset": true, 00:24:08.192 "nvme_admin": true, 00:24:08.192 "nvme_io": true, 00:24:08.192 "nvme_io_md": false, 00:24:08.192 "write_zeroes": true, 00:24:08.192 "zcopy": false, 00:24:08.192 "get_zone_info": false, 00:24:08.192 "zone_management": false, 00:24:08.192 "zone_append": false, 00:24:08.192 "compare": true, 00:24:08.192 "compare_and_write": true, 00:24:08.192 "abort": true, 00:24:08.192 "seek_hole": false, 00:24:08.192 "seek_data": false, 00:24:08.192 "copy": true, 00:24:08.192 "nvme_iov_md": false 00:24:08.192 }, 00:24:08.192 "memory_domains": [ 00:24:08.192 { 00:24:08.192 "dma_device_id": "system", 00:24:08.192 "dma_device_type": 1 00:24:08.192 } 00:24:08.192 ], 00:24:08.192 "driver_specific": { 00:24:08.192 "nvme": [ 00:24:08.192 { 00:24:08.192 "trid": { 00:24:08.192 "trtype": "TCP", 00:24:08.192 "adrfam": "IPv4", 00:24:08.192 "traddr": "10.0.0.2", 00:24:08.192 "trsvcid": "4421", 00:24:08.192 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:08.192 }, 00:24:08.192 "ctrlr_data": { 00:24:08.192 "cntlid": 3, 00:24:08.192 "vendor_id": "0x8086", 00:24:08.192 "model_number": "SPDK bdev Controller", 00:24:08.192 "serial_number": "00000000000000000000", 00:24:08.192 "firmware_revision": "24.09", 00:24:08.192 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:08.192 "oacs": { 00:24:08.192 "security": 0, 00:24:08.192 "format": 0, 00:24:08.192 "firmware": 0, 00:24:08.192 "ns_manage": 0 00:24:08.192 }, 00:24:08.192 "multi_ctrlr": true, 00:24:08.192 "ana_reporting": false 00:24:08.192 }, 00:24:08.192 "vs": { 00:24:08.192 "nvme_version": "1.3" 00:24:08.192 }, 00:24:08.192 "ns_data": { 00:24:08.192 "id": 1, 00:24:08.192 "can_share": true 00:24:08.192 } 00:24:08.192 } 00:24:08.192 ], 00:24:08.192 "mp_policy": "active_passive" 00:24:08.192 } 00:24:08.192 } 00:24:08.192 ] 00:24:08.192 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.192 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.192 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.192 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:08.192 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.192 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.GQL6KGy6E7 00:24:08.192 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:08.192 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:08.192 19:16:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:08.192 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:08.192 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:08.192 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:08.192 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:08.192 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:08.192 rmmod nvme_tcp 00:24:08.451 rmmod nvme_fabrics 00:24:08.451 rmmod nvme_keyring 00:24:08.451 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:08.451 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:08.451 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:08.451 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1716793 ']' 00:24:08.451 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1716793 00:24:08.451 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1716793 ']' 00:24:08.451 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1716793 00:24:08.451 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:24:08.451 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:08.451 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1716793 00:24:08.451 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:08.451 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:08.451 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1716793' 00:24:08.451 killing process with pid 1716793 00:24:08.451 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1716793 00:24:08.451 [2024-07-24 19:16:13.989588] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:08.451 [2024-07-24 19:16:13.989635] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:08.451 19:16:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1716793 00:24:08.709 19:16:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:08.709 19:16:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:08.709 19:16:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:08.709 19:16:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:08.709 19:16:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:08.709 19:16:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.710 19:16:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.710 19:16:14 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:11.246 00:24:11.246 real 0m6.922s 00:24:11.246 user 0m2.970s 00:24:11.246 sys 0m2.683s 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.246 ************************************ 00:24:11.246 END TEST nvmf_async_init 00:24:11.246 ************************************ 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.246 ************************************ 00:24:11.246 START TEST dma 00:24:11.246 ************************************ 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:11.246 * Looking for test storage... 00:24:11.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.246 
19:16:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.246 19:16:16 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:11.246 00:24:11.246 real 0m0.083s 00:24:11.246 user 0m0.040s 00:24:11.246 sys 0m0.051s 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:11.246 ************************************ 00:24:11.246 END TEST dma 00:24:11.246 ************************************ 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.246 ************************************ 00:24:11.246 START TEST nvmf_identify 00:24:11.246 ************************************ 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:11.246 * Looking for test storage... 00:24:11.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.246 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:11.247 19:16:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:14.532 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.532 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:14.532 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:14.532 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:14.532 19:16:19 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:14.532 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:14.532 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:14.532 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:14.532 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:14.532 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:14.532 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:14.532 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:14.532 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:14.532 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:14.532 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:14.533 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:14.533 19:16:19 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:14.533 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:14.533 Found net devices under 0000:84:00.0: cvl_0_0 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:14.533 Found net devices under 0000:84:00.1: cvl_0_1 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:14.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:24:14.533 00:24:14.533 --- 10.0.0.2 ping statistics --- 00:24:14.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.533 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:14.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:24:14.533 00:24:14.533 --- 10.0.0.1 ping statistics --- 00:24:14.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.533 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1719060 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1719060 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1719060 ']' 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.533 19:16:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:14.533 [2024-07-24 19:16:19.775359] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
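The nvmftestinit lines above show how the harness carves a point-to-point test network out of the two e810 ports so target and initiator can share one machine over real NICs: cvl_0_0 is moved into a private namespace to act as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, both directions are ping-checked, and nvmf_tgt is then launched inside that namespace so its listeners bind there. Reassembled from the trace (device names and addresses as in this run; the nvmf_tgt path assumes the repo's build tree):

    # Give the target NIC its own network namespace and address.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic in on the initiator side and sanity-check both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Start the target inside the namespace, as host/identify.sh@18 does.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF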
00:24:14.534 [2024-07-24 19:16:19.775463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.534 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.534 [2024-07-24 19:16:19.877409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:14.534 [2024-07-24 19:16:20.086554] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.534 [2024-07-24 19:16:20.086631] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.534 [2024-07-24 19:16:20.086651] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.534 [2024-07-24 19:16:20.086667] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.534 [2024-07-24 19:16:20.086682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.534 [2024-07-24 19:16:20.086836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.534 [2024-07-24 19:16:20.086901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.534 [2024-07-24 19:16:20.086957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.534 [2024-07-24 19:16:20.086961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.467 [2024-07-24 19:16:20.923110] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.467 Malloc0 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.467 19:16:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.467 [2024-07-24 19:16:20.998955] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:15.467 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.467 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:15.467 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.467 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.467 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.467 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:15.467 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.467 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.467 [ 00:24:15.467 { 00:24:15.467 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:15.467 "subtype": "Discovery", 00:24:15.467 "listen_addresses": [ 00:24:15.467 { 00:24:15.467 "trtype": "TCP", 00:24:15.467 "adrfam": "IPv4", 00:24:15.467 "traddr": "10.0.0.2", 00:24:15.467 "trsvcid": "4420" 00:24:15.467 } 00:24:15.467 ], 00:24:15.467 "allow_any_host": true, 00:24:15.467 "hosts": [] 00:24:15.467 }, 00:24:15.467 { 00:24:15.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:15.467 "subtype": "NVMe", 00:24:15.467 "listen_addresses": [ 00:24:15.467 { 00:24:15.467 "trtype": "TCP", 00:24:15.467 "adrfam": "IPv4", 00:24:15.467 "traddr": "10.0.0.2", 00:24:15.467 "trsvcid": "4420" 00:24:15.467 } 00:24:15.467 ], 00:24:15.468 "allow_any_host": true, 00:24:15.468 "hosts": [], 00:24:15.468 "serial_number": "SPDK00000000000001", 00:24:15.468 "model_number": "SPDK bdev Controller", 00:24:15.468 "max_namespaces": 32, 00:24:15.468 "min_cntlid": 1, 00:24:15.468 "max_cntlid": 65519, 00:24:15.468 "namespaces": [ 00:24:15.468 { 00:24:15.468 "nsid": 1, 00:24:15.468 "bdev_name": "Malloc0", 00:24:15.468 "name": "Malloc0", 00:24:15.468 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:15.468 "eui64": "ABCDEF0123456789", 00:24:15.468 "uuid": "2def0275-5389-4387-aed5-0e5bc4ae04c8" 00:24:15.468 } 00:24:15.468 ] 00:24:15.468 } 00:24:15.468 ] 00:24:15.468 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.468 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:15.468 [2024-07-24 19:16:21.042810] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:24:15.468 [2024-07-24 19:16:21.042855] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719221 ] 00:24:15.468 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.468 [2024-07-24 19:16:21.086562] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:15.468 [2024-07-24 19:16:21.086637] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:15.468 [2024-07-24 19:16:21.086651] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:15.468 [2024-07-24 19:16:21.086671] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:15.468 [2024-07-24 19:16:21.086689] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:15.468 [2024-07-24 19:16:21.087045] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:15.468 [2024-07-24 19:16:21.087113] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ca3540 0 00:24:15.468 [2024-07-24 19:16:21.094309] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:15.468 [2024-07-24 19:16:21.094343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:15.468 [2024-07-24 19:16:21.094356] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:15.468 [2024-07-24 19:16:21.094365] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:15.468 [2024-07-24 19:16:21.094444] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.094462] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.094471] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca3540) 00:24:15.468 [2024-07-24 19:16:21.094495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:15.468 [2024-07-24 19:16:21.094530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d033c0, cid 0, qid 0 00:24:15.468 [2024-07-24 19:16:21.098449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.468 [2024-07-24 19:16:21.098473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.468 [2024-07-24 19:16:21.098483] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.098494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d033c0) on tqpair=0x1ca3540 00:24:15.468 [2024-07-24 19:16:21.098521] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:15.468 [2024-07-24 19:16:21.098538] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:15.468 [2024-07-24 19:16:21.098551] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to read vs wait for vs (no timeout) 00:24:15.468 [2024-07-24 19:16:21.098582] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.098594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.098603] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca3540) 00:24:15.468 [2024-07-24 19:16:21.098618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.468 [2024-07-24 19:16:21.098652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d033c0, cid 0, qid 0 00:24:15.468 [2024-07-24 19:16:21.098830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.468 [2024-07-24 19:16:21.098847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.468 [2024-07-24 19:16:21.098856] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.098866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d033c0) on tqpair=0x1ca3540 00:24:15.468 [2024-07-24 19:16:21.098883] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:15.468 [2024-07-24 19:16:21.098909] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:15.468 [2024-07-24 19:16:21.098926] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.098936] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.098944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca3540) 00:24:15.468 [2024-07-24 19:16:21.098959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.468 [2024-07-24 19:16:21.098988] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d033c0, cid 0, qid 0 00:24:15.468 [2024-07-24 19:16:21.099155] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.468 [2024-07-24 19:16:21.099176] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.468 [2024-07-24 19:16:21.099185] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.099195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d033c0) on tqpair=0x1ca3540 00:24:15.468 [2024-07-24 19:16:21.099206] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:15.468 [2024-07-24 19:16:21.099225] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:15.468 [2024-07-24 19:16:21.099241] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.099251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.099259] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca3540) 00:24:15.468 [2024-07-24 19:16:21.099274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.468 [2024-07-24 19:16:21.099303] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d033c0, cid 0, qid 0 00:24:15.468 [2024-07-24 19:16:21.099455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.468 [2024-07-24 19:16:21.099476] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.468 [2024-07-24 19:16:21.099486] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.099495] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d033c0) on tqpair=0x1ca3540 00:24:15.468 [2024-07-24 19:16:21.099507] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:15.468 [2024-07-24 19:16:21.099529] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.099541] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.099549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca3540) 00:24:15.468 [2024-07-24 19:16:21.099564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.468 [2024-07-24 19:16:21.099593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d033c0, cid 0, qid 0 00:24:15.468 [2024-07-24 19:16:21.099732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.468 [2024-07-24 19:16:21.099748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.468 [2024-07-24 19:16:21.099758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.099767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d033c0) on tqpair=0x1ca3540 00:24:15.468 [2024-07-24 19:16:21.099777] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:15.468 [2024-07-24 19:16:21.099788] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:15.468 [2024-07-24 19:16:21.099812] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:15.468 [2024-07-24 19:16:21.099926] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:15.468 [2024-07-24 19:16:21.099937] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:15.468 [2024-07-24 19:16:21.099955] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.099965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.099973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca3540) 00:24:15.468 [2024-07-24 19:16:21.099988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.468 [2024-07-24 19:16:21.100016] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d033c0, cid 0, qid 0 00:24:15.468 [2024-07-24 19:16:21.100154] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:24:15.468 [2024-07-24 19:16:21.100175] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.468 [2024-07-24 19:16:21.100184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.100194] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d033c0) on tqpair=0x1ca3540 00:24:15.468 [2024-07-24 19:16:21.100204] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:15.468 [2024-07-24 19:16:21.100227] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.100239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.468 [2024-07-24 19:16:21.100247] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca3540) 00:24:15.468 [2024-07-24 19:16:21.100261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.468 [2024-07-24 19:16:21.100290] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d033c0, cid 0, qid 0 00:24:15.468 [2024-07-24 19:16:21.100437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.468 [2024-07-24 19:16:21.100459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.469 [2024-07-24 19:16:21.100468] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.100477] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d033c0) on tqpair=0x1ca3540 00:24:15.469 [2024-07-24 19:16:21.100487] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:15.469 [2024-07-24 19:16:21.100498] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:15.469 [2024-07-24 19:16:21.100517] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:15.469 [2024-07-24 19:16:21.100535] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:15.469 [2024-07-24 19:16:21.100556] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.100566] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca3540) 00:24:15.469 [2024-07-24 19:16:21.100581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.469 [2024-07-24 19:16:21.100611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d033c0, cid 0, qid 0 00:24:15.469 [2024-07-24 19:16:21.100831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.469 [2024-07-24 19:16:21.100848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.469 [2024-07-24 19:16:21.100863] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.100873] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ca3540): datao=0, datal=4096, cccid=0 00:24:15.469 [2024-07-24 19:16:21.100884] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d033c0) on tqpair(0x1ca3540): expected_datao=0, payload_size=4096 00:24:15.469 [2024-07-24 19:16:21.100894] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.100918] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.100931] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.141557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.469 [2024-07-24 19:16:21.141583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.469 [2024-07-24 19:16:21.141593] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.141603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d033c0) on tqpair=0x1ca3540 00:24:15.469 [2024-07-24 19:16:21.141618] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:15.469 [2024-07-24 19:16:21.141630] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:15.469 [2024-07-24 19:16:21.141640] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:15.469 [2024-07-24 19:16:21.141652] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:15.469 [2024-07-24 19:16:21.141662] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:15.469 [2024-07-24 19:16:21.141673] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:15.469 [2024-07-24 19:16:21.141693] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:15.469 [2024-07-24 19:16:21.141716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.141728] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.141736] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca3540) 00:24:15.469 [2024-07-24 19:16:21.141752] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:15.469 [2024-07-24 19:16:21.141783] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d033c0, cid 0, qid 0 00:24:15.469 [2024-07-24 19:16:21.141928] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.469 [2024-07-24 19:16:21.141945] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.469 [2024-07-24 19:16:21.141954] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.141963] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d033c0) on tqpair=0x1ca3540 00:24:15.469 [2024-07-24 19:16:21.141978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.141988] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.141996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x1ca3540) 00:24:15.469 [2024-07-24 19:16:21.142010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.469 [2024-07-24 19:16:21.142023] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.142032] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.142041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ca3540) 00:24:15.469 [2024-07-24 19:16:21.142053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.469 [2024-07-24 19:16:21.142072] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.142082] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.142090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ca3540) 00:24:15.469 [2024-07-24 19:16:21.142102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.469 [2024-07-24 19:16:21.142115] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.142124] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.142132] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca3540) 00:24:15.469 [2024-07-24 19:16:21.142144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.469 [2024-07-24 19:16:21.142156] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:15.469 [2024-07-24 19:16:21.142182] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:15.469 [2024-07-24 19:16:21.142199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.142208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ca3540) 00:24:15.469 [2024-07-24 19:16:21.142223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.469 [2024-07-24 19:16:21.142253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d033c0, cid 0, qid 0 00:24:15.469 [2024-07-24 19:16:21.142268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d03540, cid 1, qid 0 00:24:15.469 [2024-07-24 19:16:21.142279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d036c0, cid 2, qid 0 00:24:15.469 [2024-07-24 19:16:21.142289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d03840, cid 3, qid 0 00:24:15.469 [2024-07-24 19:16:21.142299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d039c0, cid 4, qid 0 00:24:15.469 [2024-07-24 19:16:21.142498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.469 [2024-07-24 19:16:21.142517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.469 [2024-07-24 19:16:21.142527] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.142536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d039c0) on tqpair=0x1ca3540 00:24:15.469 [2024-07-24 19:16:21.142548] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:15.469 [2024-07-24 19:16:21.142559] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:15.469 [2024-07-24 19:16:21.142584] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.142596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ca3540) 00:24:15.469 [2024-07-24 19:16:21.142611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.469 [2024-07-24 19:16:21.142640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d039c0, cid 4, qid 0 00:24:15.469 [2024-07-24 19:16:21.142801] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.469 [2024-07-24 19:16:21.142821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.469 [2024-07-24 19:16:21.142831] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.142840] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ca3540): datao=0, datal=4096, cccid=4 00:24:15.469 [2024-07-24 19:16:21.142850] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d039c0) on tqpair(0x1ca3540): expected_datao=0, payload_size=4096 00:24:15.469 [2024-07-24 19:16:21.142865] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.142890] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.142902] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.142959] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.469 [2024-07-24 19:16:21.142979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.469 [2024-07-24 19:16:21.142988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.142997] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d039c0) on tqpair=0x1ca3540 00:24:15.469 [2024-07-24 19:16:21.143022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:15.469 [2024-07-24 19:16:21.143070] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.469 [2024-07-24 19:16:21.143084] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ca3540) 00:24:15.469 [2024-07-24 19:16:21.143098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.469 [2024-07-24 19:16:21.143113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.470 [2024-07-24 19:16:21.143122] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.470 [2024-07-24 19:16:21.143131] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ca3540) 00:24:15.470 [2024-07-24 
19:16:21.143142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.470 [2024-07-24 19:16:21.143179] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d039c0, cid 4, qid 0 00:24:15.470 [2024-07-24 19:16:21.143194] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d03b40, cid 5, qid 0 00:24:15.470 [2024-07-24 19:16:21.143393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.470 [2024-07-24 19:16:21.143414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.470 [2024-07-24 19:16:21.143423] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.470 [2024-07-24 19:16:21.147446] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ca3540): datao=0, datal=1024, cccid=4 00:24:15.470 [2024-07-24 19:16:21.147461] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d039c0) on tqpair(0x1ca3540): expected_datao=0, payload_size=1024 00:24:15.470 [2024-07-24 19:16:21.147471] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.470 [2024-07-24 19:16:21.147486] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.470 [2024-07-24 19:16:21.147496] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.470 [2024-07-24 19:16:21.147508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.470 [2024-07-24 19:16:21.147520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.470 [2024-07-24 19:16:21.147530] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.470 [2024-07-24 19:16:21.147539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d03b40) on tqpair=0x1ca3540 00:24:15.732 [2024-07-24 19:16:21.187449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.732 [2024-07-24 19:16:21.187479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.732 [2024-07-24 19:16:21.187490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.732 [2024-07-24 19:16:21.187500] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d039c0) on tqpair=0x1ca3540 00:24:15.732 [2024-07-24 19:16:21.187524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.732 [2024-07-24 19:16:21.187537] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ca3540) 00:24:15.732 [2024-07-24 19:16:21.187553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.732 [2024-07-24 19:16:21.187605] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d039c0, cid 4, qid 0 00:24:15.732 [2024-07-24 19:16:21.187779] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.732 [2024-07-24 19:16:21.187797] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.732 [2024-07-24 19:16:21.187806] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.732 [2024-07-24 19:16:21.187814] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ca3540): datao=0, datal=3072, cccid=4 00:24:15.732 [2024-07-24 19:16:21.187825] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d039c0) on tqpair(0x1ca3540): expected_datao=0, payload_size=3072 00:24:15.732 
[2024-07-24 19:16:21.187835] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.732 [2024-07-24 19:16:21.187863] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.732 [2024-07-24 19:16:21.187876] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.732 [2024-07-24 19:16:21.228557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.732 [2024-07-24 19:16:21.228583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.732 [2024-07-24 19:16:21.228594] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.732 [2024-07-24 19:16:21.228603] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d039c0) on tqpair=0x1ca3540 00:24:15.732 [2024-07-24 19:16:21.228625] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.732 [2024-07-24 19:16:21.228637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ca3540) 00:24:15.732 [2024-07-24 19:16:21.228653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.732 [2024-07-24 19:16:21.228694] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d039c0, cid 4, qid 0 00:24:15.732 [2024-07-24 19:16:21.228850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.732 [2024-07-24 19:16:21.228871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.732 [2024-07-24 19:16:21.228880] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.732 [2024-07-24 19:16:21.228888] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ca3540): datao=0, datal=8, cccid=4 00:24:15.732 [2024-07-24 19:16:21.228899] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d039c0) on tqpair(0x1ca3540): expected_datao=0, payload_size=8 00:24:15.732 [2024-07-24 19:16:21.228909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.732 [2024-07-24 19:16:21.228922] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.732 [2024-07-24 19:16:21.228932] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.732 [2024-07-24 19:16:21.269557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.732 [2024-07-24 19:16:21.269581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.732 [2024-07-24 19:16:21.269591] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.732 [2024-07-24 19:16:21.269600] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d039c0) on tqpair=0x1ca3540 00:24:15.732 ===================================================== 00:24:15.732 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:15.732 ===================================================== 00:24:15.732 Controller Capabilities/Features 00:24:15.732 ================================ 00:24:15.732 Vendor ID: 0000 00:24:15.732 Subsystem Vendor ID: 0000 00:24:15.732 Serial Number: .................... 00:24:15.732 Model Number: ........................................ 
00:24:15.732 Firmware Version: 24.09 00:24:15.732 Recommended Arb Burst: 0 00:24:15.732 IEEE OUI Identifier: 00 00 00 00:24:15.732 Multi-path I/O 00:24:15.732 May have multiple subsystem ports: No 00:24:15.732 May have multiple controllers: No 00:24:15.732 Associated with SR-IOV VF: No 00:24:15.732 Max Data Transfer Size: 131072 00:24:15.732 Max Number of Namespaces: 0 00:24:15.732 Max Number of I/O Queues: 1024 00:24:15.732 NVMe Specification Version (VS): 1.3 00:24:15.732 NVMe Specification Version (Identify): 1.3 00:24:15.732 Maximum Queue Entries: 128 00:24:15.732 Contiguous Queues Required: Yes 00:24:15.732 Arbitration Mechanisms Supported 00:24:15.732 Weighted Round Robin: Not Supported 00:24:15.732 Vendor Specific: Not Supported 00:24:15.732 Reset Timeout: 15000 ms 00:24:15.732 Doorbell Stride: 4 bytes 00:24:15.732 NVM Subsystem Reset: Not Supported 00:24:15.732 Command Sets Supported 00:24:15.732 NVM Command Set: Supported 00:24:15.732 Boot Partition: Not Supported 00:24:15.732 Memory Page Size Minimum: 4096 bytes 00:24:15.732 Memory Page Size Maximum: 4096 bytes 00:24:15.732 Persistent Memory Region: Not Supported 00:24:15.732 Optional Asynchronous Events Supported 00:24:15.732 Namespace Attribute Notices: Not Supported 00:24:15.732 Firmware Activation Notices: Not Supported 00:24:15.732 ANA Change Notices: Not Supported 00:24:15.732 PLE Aggregate Log Change Notices: Not Supported 00:24:15.732 LBA Status Info Alert Notices: Not Supported 00:24:15.732 EGE Aggregate Log Change Notices: Not Supported 00:24:15.732 Normal NVM Subsystem Shutdown event: Not Supported 00:24:15.732 Zone Descriptor Change Notices: Not Supported 00:24:15.732 Discovery Log Change Notices: Supported 00:24:15.732 Controller Attributes 00:24:15.732 128-bit Host Identifier: Not Supported 00:24:15.732 Non-Operational Permissive Mode: Not Supported 00:24:15.732 NVM Sets: Not Supported 00:24:15.732 Read Recovery Levels: Not Supported 00:24:15.732 Endurance Groups: Not Supported 00:24:15.732 Predictable Latency Mode: Not Supported 00:24:15.732 Traffic Based Keep ALive: Not Supported 00:24:15.732 Namespace Granularity: Not Supported 00:24:15.732 SQ Associations: Not Supported 00:24:15.732 UUID List: Not Supported 00:24:15.732 Multi-Domain Subsystem: Not Supported 00:24:15.732 Fixed Capacity Management: Not Supported 00:24:15.732 Variable Capacity Management: Not Supported 00:24:15.732 Delete Endurance Group: Not Supported 00:24:15.732 Delete NVM Set: Not Supported 00:24:15.732 Extended LBA Formats Supported: Not Supported 00:24:15.732 Flexible Data Placement Supported: Not Supported 00:24:15.732 00:24:15.732 Controller Memory Buffer Support 00:24:15.732 ================================ 00:24:15.732 Supported: No 00:24:15.732 00:24:15.732 Persistent Memory Region Support 00:24:15.732 ================================ 00:24:15.732 Supported: No 00:24:15.732 00:24:15.732 Admin Command Set Attributes 00:24:15.732 ============================ 00:24:15.732 Security Send/Receive: Not Supported 00:24:15.732 Format NVM: Not Supported 00:24:15.732 Firmware Activate/Download: Not Supported 00:24:15.732 Namespace Management: Not Supported 00:24:15.732 Device Self-Test: Not Supported 00:24:15.732 Directives: Not Supported 00:24:15.732 NVMe-MI: Not Supported 00:24:15.732 Virtualization Management: Not Supported 00:24:15.732 Doorbell Buffer Config: Not Supported 00:24:15.732 Get LBA Status Capability: Not Supported 00:24:15.732 Command & Feature Lockdown Capability: Not Supported 00:24:15.732 Abort Command Limit: 1 00:24:15.732 Async 
Event Request Limit: 4 00:24:15.732 Number of Firmware Slots: N/A 00:24:15.732 Firmware Slot 1 Read-Only: N/A 00:24:15.732 Firmware Activation Without Reset: N/A 00:24:15.733 Multiple Update Detection Support: N/A 00:24:15.733 Firmware Update Granularity: No Information Provided 00:24:15.733 Per-Namespace SMART Log: No 00:24:15.733 Asymmetric Namespace Access Log Page: Not Supported 00:24:15.733 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:15.733 Command Effects Log Page: Not Supported 00:24:15.733 Get Log Page Extended Data: Supported 00:24:15.733 Telemetry Log Pages: Not Supported 00:24:15.733 Persistent Event Log Pages: Not Supported 00:24:15.733 Supported Log Pages Log Page: May Support 00:24:15.733 Commands Supported & Effects Log Page: Not Supported 00:24:15.733 Feature Identifiers & Effects Log Page:May Support 00:24:15.733 NVMe-MI Commands & Effects Log Page: May Support 00:24:15.733 Data Area 4 for Telemetry Log: Not Supported 00:24:15.733 Error Log Page Entries Supported: 128 00:24:15.733 Keep Alive: Not Supported 00:24:15.733 00:24:15.733 NVM Command Set Attributes 00:24:15.733 ========================== 00:24:15.733 Submission Queue Entry Size 00:24:15.733 Max: 1 00:24:15.733 Min: 1 00:24:15.733 Completion Queue Entry Size 00:24:15.733 Max: 1 00:24:15.733 Min: 1 00:24:15.733 Number of Namespaces: 0 00:24:15.733 Compare Command: Not Supported 00:24:15.733 Write Uncorrectable Command: Not Supported 00:24:15.733 Dataset Management Command: Not Supported 00:24:15.733 Write Zeroes Command: Not Supported 00:24:15.733 Set Features Save Field: Not Supported 00:24:15.733 Reservations: Not Supported 00:24:15.733 Timestamp: Not Supported 00:24:15.733 Copy: Not Supported 00:24:15.733 Volatile Write Cache: Not Present 00:24:15.733 Atomic Write Unit (Normal): 1 00:24:15.733 Atomic Write Unit (PFail): 1 00:24:15.733 Atomic Compare & Write Unit: 1 00:24:15.733 Fused Compare & Write: Supported 00:24:15.733 Scatter-Gather List 00:24:15.733 SGL Command Set: Supported 00:24:15.733 SGL Keyed: Supported 00:24:15.733 SGL Bit Bucket Descriptor: Not Supported 00:24:15.733 SGL Metadata Pointer: Not Supported 00:24:15.733 Oversized SGL: Not Supported 00:24:15.733 SGL Metadata Address: Not Supported 00:24:15.733 SGL Offset: Supported 00:24:15.733 Transport SGL Data Block: Not Supported 00:24:15.733 Replay Protected Memory Block: Not Supported 00:24:15.733 00:24:15.733 Firmware Slot Information 00:24:15.733 ========================= 00:24:15.733 Active slot: 0 00:24:15.733 00:24:15.733 00:24:15.733 Error Log 00:24:15.733 ========= 00:24:15.733 00:24:15.733 Active Namespaces 00:24:15.733 ================= 00:24:15.733 Discovery Log Page 00:24:15.733 ================== 00:24:15.733 Generation Counter: 2 00:24:15.733 Number of Records: 2 00:24:15.733 Record Format: 0 00:24:15.733 00:24:15.733 Discovery Log Entry 0 00:24:15.733 ---------------------- 00:24:15.733 Transport Type: 3 (TCP) 00:24:15.733 Address Family: 1 (IPv4) 00:24:15.733 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:15.733 Entry Flags: 00:24:15.733 Duplicate Returned Information: 1 00:24:15.733 Explicit Persistent Connection Support for Discovery: 1 00:24:15.733 Transport Requirements: 00:24:15.733 Secure Channel: Not Required 00:24:15.733 Port ID: 0 (0x0000) 00:24:15.733 Controller ID: 65535 (0xffff) 00:24:15.733 Admin Max SQ Size: 128 00:24:15.733 Transport Service Identifier: 4420 00:24:15.733 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:15.733 Transport Address: 10.0.0.2 00:24:15.733 
Discovery Log Entry 1 00:24:15.733 ---------------------- 00:24:15.733 Transport Type: 3 (TCP) 00:24:15.733 Address Family: 1 (IPv4) 00:24:15.733 Subsystem Type: 2 (NVM Subsystem) 00:24:15.733 Entry Flags: 00:24:15.733 Duplicate Returned Information: 0 00:24:15.733 Explicit Persistent Connection Support for Discovery: 0 00:24:15.733 Transport Requirements: 00:24:15.733 Secure Channel: Not Required 00:24:15.733 Port ID: 0 (0x0000) 00:24:15.733 Controller ID: 65535 (0xffff) 00:24:15.733 Admin Max SQ Size: 128 00:24:15.733 Transport Service Identifier: 4420 00:24:15.733 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:15.733 Transport Address: 10.0.0.2 [2024-07-24 19:16:21.269753] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:15.733 [2024-07-24 19:16:21.269782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d033c0) on tqpair=0x1ca3540 00:24:15.733 [2024-07-24 19:16:21.269797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.733 [2024-07-24 19:16:21.269809] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d03540) on tqpair=0x1ca3540 00:24:15.733 [2024-07-24 19:16:21.269820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.733 [2024-07-24 19:16:21.269830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d036c0) on tqpair=0x1ca3540 00:24:15.733 [2024-07-24 19:16:21.269845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.733 [2024-07-24 19:16:21.269857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d03840) on tqpair=0x1ca3540 00:24:15.733 [2024-07-24 19:16:21.269867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.733 [2024-07-24 19:16:21.269890] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.733 [2024-07-24 19:16:21.269901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.733 [2024-07-24 19:16:21.269910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca3540) 00:24:15.733 [2024-07-24 19:16:21.269925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.733 [2024-07-24 19:16:21.269959] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d03840, cid 3, qid 0 00:24:15.733 [2024-07-24 19:16:21.270093] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.733 [2024-07-24 19:16:21.270114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.733 [2024-07-24 19:16:21.270123] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.733 [2024-07-24 19:16:21.270133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d03840) on tqpair=0x1ca3540 00:24:15.733 [2024-07-24 19:16:21.270148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.733 [2024-07-24 19:16:21.270158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.733 [2024-07-24 19:16:21.270167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca3540) 00:24:15.733 [2024-07-24 
19:16:21.270181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.733 [2024-07-24 19:16:21.270218] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d03840, cid 3, qid 0 00:24:15.733 [2024-07-24 19:16:21.270389] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.733 [2024-07-24 19:16:21.270409] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.733 [2024-07-24 19:16:21.270418] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.733 [2024-07-24 19:16:21.270440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d03840) on tqpair=0x1ca3540 00:24:15.733 [2024-07-24 19:16:21.270453] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:15.733 [2024-07-24 19:16:21.270464] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:15.733 [2024-07-24 19:16:21.270486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.733 [2024-07-24 19:16:21.270498] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.733 [2024-07-24 19:16:21.270507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca3540) 00:24:15.733 [2024-07-24 19:16:21.270521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.733 [2024-07-24 19:16:21.270550] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d03840, cid 3, qid 0 00:24:15.733 [2024-07-24 19:16:21.270696] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.733 [2024-07-24 19:16:21.270717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.733 [2024-07-24 19:16:21.270726] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.733 [2024-07-24 19:16:21.270735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d03840) on tqpair=0x1ca3540 00:24:15.733 [2024-07-24 19:16:21.270758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.733 [2024-07-24 19:16:21.270770] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.270779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca3540) 00:24:15.734 [2024-07-24 19:16:21.270792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.734 [2024-07-24 19:16:21.270827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d03840, cid 3, qid 0 00:24:15.734 [2024-07-24 19:16:21.270965] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.734 [2024-07-24 19:16:21.270982] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.734 [2024-07-24 19:16:21.270991] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.271000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d03840) on tqpair=0x1ca3540 00:24:15.734 [2024-07-24 19:16:21.271022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.271034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.271042] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca3540) 00:24:15.734 [2024-07-24 19:16:21.271056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.734 [2024-07-24 19:16:21.271084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d03840, cid 3, qid 0 00:24:15.734 [2024-07-24 19:16:21.271218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.734 [2024-07-24 19:16:21.271239] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.734 [2024-07-24 19:16:21.271248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.271257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d03840) on tqpair=0x1ca3540 00:24:15.734 [2024-07-24 19:16:21.271279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.271291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.271300] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca3540) 00:24:15.734 [2024-07-24 19:16:21.271314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.734 [2024-07-24 19:16:21.271342] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d03840, cid 3, qid 0 00:24:15.734 [2024-07-24 19:16:21.275446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.734 [2024-07-24 19:16:21.275469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.734 [2024-07-24 19:16:21.275479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.275489] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d03840) on tqpair=0x1ca3540 00:24:15.734 [2024-07-24 19:16:21.275513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.275526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.275535] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca3540) 00:24:15.734 [2024-07-24 19:16:21.275549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.734 [2024-07-24 19:16:21.275580] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d03840, cid 3, qid 0 00:24:15.734 [2024-07-24 19:16:21.275723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.734 [2024-07-24 19:16:21.275744] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.734 [2024-07-24 19:16:21.275753] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.275762] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d03840) on tqpair=0x1ca3540 00:24:15.734 [2024-07-24 19:16:21.275780] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:24:15.734 00:24:15.734 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
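The identify run that just completed above, and the one whose command line is echoed immediately above, exercise the target configured earlier in this job: the *DEBUG* traces follow the host-side controller-init state machine (connect adminq, read VS/CAP, toggle CC.EN against CSTS.RDY, identify controller, configure AER, set keep-alive) over NVMe/TCP, first for the discovery subsystem and next for nqn.2016-06.io.spdk:cnode1. A minimal sketch that reproduces the same flow outside the autotest harness follows; the bdev, transport, and subsystem creation steps are not shown in this excerpt and are filled in from SPDK's standard RPCs, and SPDK_DIR is an illustrative path, so treat this as an approximation of identify.sh rather than the script itself.

# Sketch only. Assumes a running nvmf_tgt reachable over its default RPC
# socket and an SPDK build tree at $SPDK_DIR (illustrative path).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK_DIR/scripts/rpc.py

# Target side. The first three steps are assumptions filled in from
# SPDK's standard RPCs; the rest mirror the rpc_cmd calls in this log.
$RPC bdev_malloc_create 64 512 -b Malloc0          # backing bdev for nsid 1
$RPC nvmf_create_transport -t tcp                  # enable the TCP transport
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
  --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Host side: identify the discovery subsystem, then the NVM subsystem,
# matching the two invocations captured in this log.
$SPDK_DIR/build/bin/spdk_nvme_identify -r \
  'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
$SPDK_DIR/build/bin/spdk_nvme_identify -r \
  'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all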
00:24:15.734 [2024-07-24 19:16:21.320674] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:24:15.734 [2024-07-24 19:16:21.320732] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719231 ] 00:24:15.734 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.734 [2024-07-24 19:16:21.365970] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:15.734 [2024-07-24 19:16:21.366037] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:15.734 [2024-07-24 19:16:21.366050] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:15.734 [2024-07-24 19:16:21.366068] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:15.734 [2024-07-24 19:16:21.366084] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:15.734 [2024-07-24 19:16:21.366323] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:15.734 [2024-07-24 19:16:21.366373] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f30540 0 00:24:15.734 [2024-07-24 19:16:21.372449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:15.734 [2024-07-24 19:16:21.372479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:15.734 [2024-07-24 19:16:21.372491] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:15.734 [2024-07-24 19:16:21.372500] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:15.734 [2024-07-24 19:16:21.372551] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.372566] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.372576] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f30540) 00:24:15.734 [2024-07-24 19:16:21.372595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:15.734 [2024-07-24 19:16:21.372630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f903c0, cid 0, qid 0 00:24:15.734 [2024-07-24 19:16:21.380448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.734 [2024-07-24 19:16:21.380472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.734 [2024-07-24 19:16:21.380482] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.380491] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f903c0) on tqpair=0x1f30540 00:24:15.734 [2024-07-24 19:16:21.380510] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:15.734 [2024-07-24 19:16:21.380525] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:15.734 [2024-07-24 19:16:21.380538] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:15.734 [2024-07-24 19:16:21.380565] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:24:15.734 [2024-07-24 19:16:21.380578] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.380586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f30540) 00:24:15.734 [2024-07-24 19:16:21.380602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.734 [2024-07-24 19:16:21.380635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f903c0, cid 0, qid 0 00:24:15.734 [2024-07-24 19:16:21.380784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.734 [2024-07-24 19:16:21.380806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.734 [2024-07-24 19:16:21.380820] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.380831] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f903c0) on tqpair=0x1f30540 00:24:15.734 [2024-07-24 19:16:21.380846] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:15.734 [2024-07-24 19:16:21.380867] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:15.734 [2024-07-24 19:16:21.380883] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.380894] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.380902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f30540) 00:24:15.734 [2024-07-24 19:16:21.380917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.734 [2024-07-24 19:16:21.380947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f903c0, cid 0, qid 0 00:24:15.734 [2024-07-24 19:16:21.381085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.734 [2024-07-24 19:16:21.381106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.734 [2024-07-24 19:16:21.381115] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.381124] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f903c0) on tqpair=0x1f30540 00:24:15.734 [2024-07-24 19:16:21.381135] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:15.734 [2024-07-24 19:16:21.381155] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:15.734 [2024-07-24 19:16:21.381172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.381182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.381191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f30540) 00:24:15.734 [2024-07-24 19:16:21.381205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.734 [2024-07-24 19:16:21.381235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f903c0, cid 0, qid 0 00:24:15.734 [2024-07-24 19:16:21.381382] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:24:15.734 [2024-07-24 19:16:21.381399] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.734 [2024-07-24 19:16:21.381408] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.381417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f903c0) on tqpair=0x1f30540 00:24:15.734 [2024-07-24 19:16:21.381439] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:15.734 [2024-07-24 19:16:21.381464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.381477] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.734 [2024-07-24 19:16:21.381485] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f30540) 00:24:15.734 [2024-07-24 19:16:21.381500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.734 [2024-07-24 19:16:21.381529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f903c0, cid 0, qid 0 00:24:15.735 [2024-07-24 19:16:21.381682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.735 [2024-07-24 19:16:21.381699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.735 [2024-07-24 19:16:21.381708] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.381718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f903c0) on tqpair=0x1f30540 00:24:15.735 [2024-07-24 19:16:21.381728] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:15.735 [2024-07-24 19:16:21.381744] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:15.735 [2024-07-24 19:16:21.381763] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:15.735 [2024-07-24 19:16:21.381876] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:15.735 [2024-07-24 19:16:21.381886] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:15.735 [2024-07-24 19:16:21.381902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.381912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.381921] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f30540) 00:24:15.735 [2024-07-24 19:16:21.381935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.735 [2024-07-24 19:16:21.381965] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f903c0, cid 0, qid 0 00:24:15.735 [2024-07-24 19:16:21.382103] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.735 [2024-07-24 19:16:21.382123] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.735 [2024-07-24 19:16:21.382133] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.735 
[2024-07-24 19:16:21.382142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f903c0) on tqpair=0x1f30540 00:24:15.735 [2024-07-24 19:16:21.382152] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:15.735 [2024-07-24 19:16:21.382175] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.382187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.382196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f30540) 00:24:15.735 [2024-07-24 19:16:21.382210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.735 [2024-07-24 19:16:21.382240] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f903c0, cid 0, qid 0 00:24:15.735 [2024-07-24 19:16:21.382405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.735 [2024-07-24 19:16:21.382422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.735 [2024-07-24 19:16:21.382441] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.382451] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f903c0) on tqpair=0x1f30540 00:24:15.735 [2024-07-24 19:16:21.382461] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:15.735 [2024-07-24 19:16:21.382473] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:15.735 [2024-07-24 19:16:21.382491] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:15.735 [2024-07-24 19:16:21.382510] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:15.735 [2024-07-24 19:16:21.382529] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.382539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f30540) 00:24:15.735 [2024-07-24 19:16:21.382554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.735 [2024-07-24 19:16:21.382585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f903c0, cid 0, qid 0 00:24:15.735 [2024-07-24 19:16:21.382807] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.735 [2024-07-24 19:16:21.382825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.735 [2024-07-24 19:16:21.382834] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.382842] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f30540): datao=0, datal=4096, cccid=0 00:24:15.735 [2024-07-24 19:16:21.382852] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f903c0) on tqpair(0x1f30540): expected_datao=0, payload_size=4096 00:24:15.735 [2024-07-24 19:16:21.382863] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.382877] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.382887] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.382906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.735 [2024-07-24 19:16:21.382919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.735 [2024-07-24 19:16:21.382928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.382937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f903c0) on tqpair=0x1f30540 00:24:15.735 [2024-07-24 19:16:21.382951] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:15.735 [2024-07-24 19:16:21.382963] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:15.735 [2024-07-24 19:16:21.382973] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:15.735 [2024-07-24 19:16:21.382982] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:15.735 [2024-07-24 19:16:21.382992] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:15.735 [2024-07-24 19:16:21.383002] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:15.735 [2024-07-24 19:16:21.383022] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:15.735 [2024-07-24 19:16:21.383043] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.383055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.383064] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f30540) 00:24:15.735 [2024-07-24 19:16:21.383078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:15.735 [2024-07-24 19:16:21.383108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f903c0, cid 0, qid 0 00:24:15.735 [2024-07-24 19:16:21.383272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.735 [2024-07-24 19:16:21.383293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.735 [2024-07-24 19:16:21.383302] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.383311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f903c0) on tqpair=0x1f30540 00:24:15.735 [2024-07-24 19:16:21.383324] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.383334] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.383343] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f30540) 00:24:15.735 [2024-07-24 19:16:21.383356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.735 [2024-07-24 19:16:21.383370] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.383379] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.383387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f30540) 00:24:15.735 [2024-07-24 19:16:21.383404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.735 [2024-07-24 19:16:21.383418] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.383437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.383447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f30540) 00:24:15.735 [2024-07-24 19:16:21.383459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.735 [2024-07-24 19:16:21.383473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.383482] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.383490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.735 [2024-07-24 19:16:21.383502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.735 [2024-07-24 19:16:21.383514] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:15.735 [2024-07-24 19:16:21.383540] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:15.735 [2024-07-24 19:16:21.383557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.735 [2024-07-24 19:16:21.383567] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f30540) 00:24:15.735 [2024-07-24 19:16:21.383580] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.735 [2024-07-24 19:16:21.383612] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f903c0, cid 0, qid 0 00:24:15.735 [2024-07-24 19:16:21.383628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90540, cid 1, qid 0 00:24:15.735 [2024-07-24 19:16:21.383639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f906c0, cid 2, qid 0 00:24:15.735 [2024-07-24 19:16:21.383649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.736 [2024-07-24 19:16:21.383659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f909c0, cid 4, qid 0 00:24:15.736 [2024-07-24 19:16:21.383838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.736 [2024-07-24 19:16:21.383858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.736 [2024-07-24 19:16:21.383867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.383877] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f909c0) on tqpair=0x1f30540 00:24:15.736 [2024-07-24 19:16:21.383887] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:15.736 [2024-07-24 
19:16:21.383899] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.383924] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.383940] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.383954] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.383964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.383973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f30540) 00:24:15.736 [2024-07-24 19:16:21.383987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:15.736 [2024-07-24 19:16:21.384017] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f909c0, cid 4, qid 0 00:24:15.736 [2024-07-24 19:16:21.384161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.736 [2024-07-24 19:16:21.384179] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.736 [2024-07-24 19:16:21.384188] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.384197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f909c0) on tqpair=0x1f30540 00:24:15.736 [2024-07-24 19:16:21.384289] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.384317] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.384336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.384347] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f30540) 00:24:15.736 [2024-07-24 19:16:21.384361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.736 [2024-07-24 19:16:21.384391] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f909c0, cid 4, qid 0 00:24:15.736 [2024-07-24 19:16:21.388455] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.736 [2024-07-24 19:16:21.388477] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.736 [2024-07-24 19:16:21.388487] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.388495] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f30540): datao=0, datal=4096, cccid=4 00:24:15.736 [2024-07-24 19:16:21.388506] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f909c0) on tqpair(0x1f30540): expected_datao=0, payload_size=4096 00:24:15.736 [2024-07-24 19:16:21.388516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.388529] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.388540] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.388551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.736 [2024-07-24 19:16:21.388564] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.736 [2024-07-24 19:16:21.388572] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.388581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f909c0) on tqpair=0x1f30540 00:24:15.736 [2024-07-24 19:16:21.388601] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:15.736 [2024-07-24 19:16:21.388628] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.388653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.388672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.388683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f30540) 00:24:15.736 [2024-07-24 19:16:21.388697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.736 [2024-07-24 19:16:21.388728] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f909c0, cid 4, qid 0 00:24:15.736 [2024-07-24 19:16:21.388910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.736 [2024-07-24 19:16:21.388931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.736 [2024-07-24 19:16:21.388940] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.388948] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f30540): datao=0, datal=4096, cccid=4 00:24:15.736 [2024-07-24 19:16:21.388958] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f909c0) on tqpair(0x1f30540): expected_datao=0, payload_size=4096 00:24:15.736 [2024-07-24 19:16:21.388973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.388998] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.389011] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.389085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.736 [2024-07-24 19:16:21.389101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.736 [2024-07-24 19:16:21.389110] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.389119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f909c0) on tqpair=0x1f30540 00:24:15.736 [2024-07-24 19:16:21.389148] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.389174] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.389193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.389204] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f30540) 00:24:15.736 [2024-07-24 19:16:21.389219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.736 [2024-07-24 19:16:21.389249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f909c0, cid 4, qid 0 00:24:15.736 [2024-07-24 19:16:21.389405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.736 [2024-07-24 19:16:21.389426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.736 [2024-07-24 19:16:21.389450] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.389459] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f30540): datao=0, datal=4096, cccid=4 00:24:15.736 [2024-07-24 19:16:21.389469] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f909c0) on tqpair(0x1f30540): expected_datao=0, payload_size=4096 00:24:15.736 [2024-07-24 19:16:21.389479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.389503] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.389516] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.389571] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.736 [2024-07-24 19:16:21.389586] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.736 [2024-07-24 19:16:21.389595] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.389604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f909c0) on tqpair=0x1f30540 00:24:15.736 [2024-07-24 19:16:21.389621] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.389641] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.389662] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.389680] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.389693] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.389705] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.389716] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:15.736 [2024-07-24 19:16:21.389731] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:15.736 [2024-07-24 19:16:21.389743] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:15.736 [2024-07-24 19:16:21.389768] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
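The trace above covers the complete NVMe-oF host initialization state machine for nqn.2016-06.io.spdk:cnode1: enable (CC.EN = 1), wait for CSTS.RDY = 1, Identify Controller, AER configuration, keep-alive setup, queue-count negotiation, active-namespace discovery, and the final transition to ready. As a rough guide only, a minimal sketch for driving this same host-side sequence by hand against the listener follows; the transport parameters are taken from the log above, while the example-binary path and the log-flag name are assumptions that may differ per SPDK build and configuration:

# Minimal sketch (assumed paths/flags). SPDK ships an 'identify' example app;
# the -r transport string below uses the address, port, and subsystem NQN
# reported in this trace. -L enables a debug log flag (flag name assumed,
# requires a debug build) like the one producing the *DEBUG* lines above.
./build/examples/identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -L nvme

# Cross-check from the kernel initiator side with nvme-cli, if installed:
nvme discover -t tcp -a 10.0.0.2 -s 4420

Either command exercises the same fabric property get/set and identify admin sequence logged here, ending in the controller report printed further below.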
00:24:15.736 [2024-07-24 19:16:21.389780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f30540) 00:24:15.736 [2024-07-24 19:16:21.389795] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.736 [2024-07-24 19:16:21.389810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.389819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.736 [2024-07-24 19:16:21.389828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f30540) 00:24:15.736 [2024-07-24 19:16:21.389840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.737 [2024-07-24 19:16:21.389876] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f909c0, cid 4, qid 0 00:24:15.737 [2024-07-24 19:16:21.389892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90b40, cid 5, qid 0 00:24:15.737 [2024-07-24 19:16:21.390070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.737 [2024-07-24 19:16:21.390087] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.737 [2024-07-24 19:16:21.390096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.390105] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f909c0) on tqpair=0x1f30540 00:24:15.737 [2024-07-24 19:16:21.390118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.737 [2024-07-24 19:16:21.390131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.737 [2024-07-24 19:16:21.390139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.390148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90b40) on tqpair=0x1f30540 00:24:15.737 [2024-07-24 19:16:21.390169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.390181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f30540) 00:24:15.737 [2024-07-24 19:16:21.390195] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.737 [2024-07-24 19:16:21.390224] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90b40, cid 5, qid 0 00:24:15.737 [2024-07-24 19:16:21.390370] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.737 [2024-07-24 19:16:21.390386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.737 [2024-07-24 19:16:21.390395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.390404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90b40) on tqpair=0x1f30540 00:24:15.737 [2024-07-24 19:16:21.390425] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.390449] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f30540) 00:24:15.737 [2024-07-24 19:16:21.390464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.737 [2024-07-24 19:16:21.390493] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90b40, cid 5, qid 0 00:24:15.737 [2024-07-24 19:16:21.390667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.737 [2024-07-24 19:16:21.390684] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.737 [2024-07-24 19:16:21.390693] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.390702] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90b40) on tqpair=0x1f30540 00:24:15.737 [2024-07-24 19:16:21.390723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.390739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f30540) 00:24:15.737 [2024-07-24 19:16:21.390754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.737 [2024-07-24 19:16:21.390784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90b40, cid 5, qid 0 00:24:15.737 [2024-07-24 19:16:21.390922] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.737 [2024-07-24 19:16:21.390942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.737 [2024-07-24 19:16:21.390952] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.390961] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90b40) on tqpair=0x1f30540 00:24:15.737 [2024-07-24 19:16:21.390993] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f30540) 00:24:15.737 [2024-07-24 19:16:21.391022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.737 [2024-07-24 19:16:21.391040] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f30540) 00:24:15.737 [2024-07-24 19:16:21.391062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.737 [2024-07-24 19:16:21.391078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391088] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1f30540) 00:24:15.737 [2024-07-24 19:16:21.391101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.737 [2024-07-24 19:16:21.391117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f30540) 00:24:15.737 [2024-07-24 19:16:21.391140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.737 [2024-07-24 19:16:21.391171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90b40, cid 5, qid 0 00:24:15.737 
[2024-07-24 19:16:21.391186] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f909c0, cid 4, qid 0 00:24:15.737 [2024-07-24 19:16:21.391197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90cc0, cid 6, qid 0 00:24:15.737 [2024-07-24 19:16:21.391207] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90e40, cid 7, qid 0 00:24:15.737 [2024-07-24 19:16:21.391479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.737 [2024-07-24 19:16:21.391498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.737 [2024-07-24 19:16:21.391507] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391516] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f30540): datao=0, datal=8192, cccid=5 00:24:15.737 [2024-07-24 19:16:21.391526] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f90b40) on tqpair(0x1f30540): expected_datao=0, payload_size=8192 00:24:15.737 [2024-07-24 19:16:21.391536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391583] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391597] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.737 [2024-07-24 19:16:21.391621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.737 [2024-07-24 19:16:21.391635] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391644] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f30540): datao=0, datal=512, cccid=4 00:24:15.737 [2024-07-24 19:16:21.391655] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f909c0) on tqpair(0x1f30540): expected_datao=0, payload_size=512 00:24:15.737 [2024-07-24 19:16:21.391664] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391677] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391687] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.737 [2024-07-24 19:16:21.391710] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.737 [2024-07-24 19:16:21.391718] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391727] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f30540): datao=0, datal=512, cccid=6 00:24:15.737 [2024-07-24 19:16:21.391737] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f90cc0) on tqpair(0x1f30540): expected_datao=0, payload_size=512 00:24:15.737 [2024-07-24 19:16:21.391746] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391759] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391768] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391780] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:15.737 [2024-07-24 19:16:21.391791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:15.737 [2024-07-24 19:16:21.391800] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391808] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f30540): datao=0, datal=4096, cccid=7 00:24:15.737 [2024-07-24 19:16:21.391818] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f90e40) on tqpair(0x1f30540): expected_datao=0, payload_size=4096 00:24:15.737 [2024-07-24 19:16:21.391827] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.737 [2024-07-24 19:16:21.391840] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:15.738 [2024-07-24 19:16:21.391850] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:15.738 [2024-07-24 19:16:21.391865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.738 [2024-07-24 19:16:21.391878] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.738 [2024-07-24 19:16:21.391887] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.738 [2024-07-24 19:16:21.391896] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90b40) on tqpair=0x1f30540 00:24:15.738 [2024-07-24 19:16:21.391920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.738 [2024-07-24 19:16:21.391935] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.738 [2024-07-24 19:16:21.391943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.738 [2024-07-24 19:16:21.391952] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f909c0) on tqpair=0x1f30540 00:24:15.738 [2024-07-24 19:16:21.391972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.738 [2024-07-24 19:16:21.391986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.738 [2024-07-24 19:16:21.391995] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.738 [2024-07-24 19:16:21.392004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90cc0) on tqpair=0x1f30540 00:24:15.738 [2024-07-24 19:16:21.392018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.738 [2024-07-24 19:16:21.392031] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.738 [2024-07-24 19:16:21.392039] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.738 [2024-07-24 19:16:21.392048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90e40) on tqpair=0x1f30540 00:24:15.738 ===================================================== 00:24:15.738 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:15.738 ===================================================== 00:24:15.738 Controller Capabilities/Features 00:24:15.738 ================================ 00:24:15.738 Vendor ID: 8086 00:24:15.738 Subsystem Vendor ID: 8086 00:24:15.738 Serial Number: SPDK00000000000001 00:24:15.738 Model Number: SPDK bdev Controller 00:24:15.738 Firmware Version: 24.09 00:24:15.738 Recommended Arb Burst: 6 00:24:15.738 IEEE OUI Identifier: e4 d2 5c 00:24:15.738 Multi-path I/O 00:24:15.738 May have multiple subsystem ports: Yes 00:24:15.738 May have multiple controllers: Yes 00:24:15.738 Associated with SR-IOV VF: No 00:24:15.738 Max Data Transfer Size: 131072 00:24:15.738 Max Number of Namespaces: 32 00:24:15.738 Max Number of I/O Queues: 127 00:24:15.738 NVMe Specification Version (VS): 1.3 00:24:15.738 NVMe Specification Version (Identify): 1.3 
00:24:15.738 Maximum Queue Entries: 128 00:24:15.738 Contiguous Queues Required: Yes 00:24:15.738 Arbitration Mechanisms Supported 00:24:15.738 Weighted Round Robin: Not Supported 00:24:15.738 Vendor Specific: Not Supported 00:24:15.738 Reset Timeout: 15000 ms 00:24:15.738 Doorbell Stride: 4 bytes 00:24:15.738 NVM Subsystem Reset: Not Supported 00:24:15.738 Command Sets Supported 00:24:15.738 NVM Command Set: Supported 00:24:15.738 Boot Partition: Not Supported 00:24:15.738 Memory Page Size Minimum: 4096 bytes 00:24:15.738 Memory Page Size Maximum: 4096 bytes 00:24:15.738 Persistent Memory Region: Not Supported 00:24:15.738 Optional Asynchronous Events Supported 00:24:15.738 Namespace Attribute Notices: Supported 00:24:15.738 Firmware Activation Notices: Not Supported 00:24:15.738 ANA Change Notices: Not Supported 00:24:15.738 PLE Aggregate Log Change Notices: Not Supported 00:24:15.738 LBA Status Info Alert Notices: Not Supported 00:24:15.738 EGE Aggregate Log Change Notices: Not Supported 00:24:15.738 Normal NVM Subsystem Shutdown event: Not Supported 00:24:15.738 Zone Descriptor Change Notices: Not Supported 00:24:15.738 Discovery Log Change Notices: Not Supported 00:24:15.738 Controller Attributes 00:24:15.738 128-bit Host Identifier: Supported 00:24:15.738 Non-Operational Permissive Mode: Not Supported 00:24:15.738 NVM Sets: Not Supported 00:24:15.738 Read Recovery Levels: Not Supported 00:24:15.738 Endurance Groups: Not Supported 00:24:15.738 Predictable Latency Mode: Not Supported 00:24:15.738 Traffic Based Keep ALive: Not Supported 00:24:15.738 Namespace Granularity: Not Supported 00:24:15.738 SQ Associations: Not Supported 00:24:15.738 UUID List: Not Supported 00:24:15.738 Multi-Domain Subsystem: Not Supported 00:24:15.738 Fixed Capacity Management: Not Supported 00:24:15.738 Variable Capacity Management: Not Supported 00:24:15.738 Delete Endurance Group: Not Supported 00:24:15.738 Delete NVM Set: Not Supported 00:24:15.738 Extended LBA Formats Supported: Not Supported 00:24:15.738 Flexible Data Placement Supported: Not Supported 00:24:15.738 00:24:15.738 Controller Memory Buffer Support 00:24:15.738 ================================ 00:24:15.738 Supported: No 00:24:15.738 00:24:15.738 Persistent Memory Region Support 00:24:15.738 ================================ 00:24:15.738 Supported: No 00:24:15.738 00:24:15.738 Admin Command Set Attributes 00:24:15.738 ============================ 00:24:15.738 Security Send/Receive: Not Supported 00:24:15.738 Format NVM: Not Supported 00:24:15.738 Firmware Activate/Download: Not Supported 00:24:15.738 Namespace Management: Not Supported 00:24:15.738 Device Self-Test: Not Supported 00:24:15.738 Directives: Not Supported 00:24:15.738 NVMe-MI: Not Supported 00:24:15.738 Virtualization Management: Not Supported 00:24:15.738 Doorbell Buffer Config: Not Supported 00:24:15.738 Get LBA Status Capability: Not Supported 00:24:15.738 Command & Feature Lockdown Capability: Not Supported 00:24:15.738 Abort Command Limit: 4 00:24:15.738 Async Event Request Limit: 4 00:24:15.738 Number of Firmware Slots: N/A 00:24:15.738 Firmware Slot 1 Read-Only: N/A 00:24:15.738 Firmware Activation Without Reset: N/A 00:24:15.738 Multiple Update Detection Support: N/A 00:24:15.738 Firmware Update Granularity: No Information Provided 00:24:15.738 Per-Namespace SMART Log: No 00:24:15.738 Asymmetric Namespace Access Log Page: Not Supported 00:24:15.738 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:15.738 Command Effects Log Page: Supported 00:24:15.738 Get Log Page Extended 
Data: Supported 00:24:15.738 Telemetry Log Pages: Not Supported 00:24:15.738 Persistent Event Log Pages: Not Supported 00:24:15.738 Supported Log Pages Log Page: May Support 00:24:15.738 Commands Supported & Effects Log Page: Not Supported 00:24:15.738 Feature Identifiers & Effects Log Page:May Support 00:24:15.738 NVMe-MI Commands & Effects Log Page: May Support 00:24:15.738 Data Area 4 for Telemetry Log: Not Supported 00:24:15.738 Error Log Page Entries Supported: 128 00:24:15.738 Keep Alive: Supported 00:24:15.738 Keep Alive Granularity: 10000 ms 00:24:15.738 00:24:15.738 NVM Command Set Attributes 00:24:15.738 ========================== 00:24:15.738 Submission Queue Entry Size 00:24:15.738 Max: 64 00:24:15.738 Min: 64 00:24:15.738 Completion Queue Entry Size 00:24:15.738 Max: 16 00:24:15.738 Min: 16 00:24:15.738 Number of Namespaces: 32 00:24:15.738 Compare Command: Supported 00:24:15.738 Write Uncorrectable Command: Not Supported 00:24:15.738 Dataset Management Command: Supported 00:24:15.738 Write Zeroes Command: Supported 00:24:15.738 Set Features Save Field: Not Supported 00:24:15.738 Reservations: Supported 00:24:15.738 Timestamp: Not Supported 00:24:15.738 Copy: Supported 00:24:15.738 Volatile Write Cache: Present 00:24:15.738 Atomic Write Unit (Normal): 1 00:24:15.738 Atomic Write Unit (PFail): 1 00:24:15.738 Atomic Compare & Write Unit: 1 00:24:15.738 Fused Compare & Write: Supported 00:24:15.738 Scatter-Gather List 00:24:15.738 SGL Command Set: Supported 00:24:15.738 SGL Keyed: Supported 00:24:15.738 SGL Bit Bucket Descriptor: Not Supported 00:24:15.738 SGL Metadata Pointer: Not Supported 00:24:15.738 Oversized SGL: Not Supported 00:24:15.738 SGL Metadata Address: Not Supported 00:24:15.738 SGL Offset: Supported 00:24:15.738 Transport SGL Data Block: Not Supported 00:24:15.738 Replay Protected Memory Block: Not Supported 00:24:15.738 00:24:15.738 Firmware Slot Information 00:24:15.738 ========================= 00:24:15.738 Active slot: 1 00:24:15.738 Slot 1 Firmware Revision: 24.09 00:24:15.738 00:24:15.738 00:24:15.738 Commands Supported and Effects 00:24:15.738 ============================== 00:24:15.738 Admin Commands 00:24:15.738 -------------- 00:24:15.738 Get Log Page (02h): Supported 00:24:15.738 Identify (06h): Supported 00:24:15.738 Abort (08h): Supported 00:24:15.738 Set Features (09h): Supported 00:24:15.738 Get Features (0Ah): Supported 00:24:15.738 Asynchronous Event Request (0Ch): Supported 00:24:15.739 Keep Alive (18h): Supported 00:24:15.739 I/O Commands 00:24:15.739 ------------ 00:24:15.739 Flush (00h): Supported LBA-Change 00:24:15.739 Write (01h): Supported LBA-Change 00:24:15.739 Read (02h): Supported 00:24:15.739 Compare (05h): Supported 00:24:15.739 Write Zeroes (08h): Supported LBA-Change 00:24:15.739 Dataset Management (09h): Supported LBA-Change 00:24:15.739 Copy (19h): Supported LBA-Change 00:24:15.739 00:24:15.739 Error Log 00:24:15.739 ========= 00:24:15.739 00:24:15.739 Arbitration 00:24:15.739 =========== 00:24:15.739 Arbitration Burst: 1 00:24:15.739 00:24:15.739 Power Management 00:24:15.739 ================ 00:24:15.739 Number of Power States: 1 00:24:15.739 Current Power State: Power State #0 00:24:15.739 Power State #0: 00:24:15.739 Max Power: 0.00 W 00:24:15.739 Non-Operational State: Operational 00:24:15.739 Entry Latency: Not Reported 00:24:15.739 Exit Latency: Not Reported 00:24:15.739 Relative Read Throughput: 0 00:24:15.739 Relative Read Latency: 0 00:24:15.739 Relative Write Throughput: 0 00:24:15.739 Relative Write Latency: 0 
00:24:15.739 Idle Power: Not Reported 00:24:15.739 Active Power: Not Reported 00:24:15.739 Non-Operational Permissive Mode: Not Supported 00:24:15.739 00:24:15.739 Health Information 00:24:15.739 ================== 00:24:15.739 Critical Warnings: 00:24:15.739 Available Spare Space: OK 00:24:15.739 Temperature: OK 00:24:15.739 Device Reliability: OK 00:24:15.739 Read Only: No 00:24:15.739 Volatile Memory Backup: OK 00:24:15.739 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:15.739 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:15.739 Available Spare: 0% 00:24:15.739 Available Spare Threshold: 0% 00:24:15.739 Life Percentage Used:[2024-07-24 19:16:21.392204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.392223] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f30540) 00:24:15.739 [2024-07-24 19:16:21.392239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.739 [2024-07-24 19:16:21.392270] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90e40, cid 7, qid 0 00:24:15.739 [2024-07-24 19:16:21.396444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.739 [2024-07-24 19:16:21.396468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.739 [2024-07-24 19:16:21.396478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.396487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90e40) on tqpair=0x1f30540 00:24:15.739 [2024-07-24 19:16:21.396548] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:15.739 [2024-07-24 19:16:21.396576] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f903c0) on tqpair=0x1f30540 00:24:15.739 [2024-07-24 19:16:21.396590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.739 [2024-07-24 19:16:21.396602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90540) on tqpair=0x1f30540 00:24:15.739 [2024-07-24 19:16:21.396612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.739 [2024-07-24 19:16:21.396623] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f906c0) on tqpair=0x1f30540 00:24:15.739 [2024-07-24 19:16:21.396633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.739 [2024-07-24 19:16:21.396644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.739 [2024-07-24 19:16:21.396654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.739 [2024-07-24 19:16:21.396671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.396681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.396690] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.739 [2024-07-24 19:16:21.396704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:15.739 [2024-07-24 19:16:21.396736] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.739 [2024-07-24 19:16:21.396886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.739 [2024-07-24 19:16:21.396903] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.739 [2024-07-24 19:16:21.396913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.396922] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.739 [2024-07-24 19:16:21.396937] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.396947] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.396955] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.739 [2024-07-24 19:16:21.396970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.739 [2024-07-24 19:16:21.397005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.739 [2024-07-24 19:16:21.397180] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.739 [2024-07-24 19:16:21.397196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.739 [2024-07-24 19:16:21.397205] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.397214] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.739 [2024-07-24 19:16:21.397229] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:15.739 [2024-07-24 19:16:21.397240] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:15.739 [2024-07-24 19:16:21.397262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.397273] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.397282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.739 [2024-07-24 19:16:21.397296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.739 [2024-07-24 19:16:21.397324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.739 [2024-07-24 19:16:21.397474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.739 [2024-07-24 19:16:21.397492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.739 [2024-07-24 19:16:21.397501] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.397510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.739 [2024-07-24 19:16:21.397532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.397545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.397553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.739 [2024-07-24 19:16:21.397567] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.739 [2024-07-24 19:16:21.397597] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.739 [2024-07-24 19:16:21.397731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.739 [2024-07-24 19:16:21.397752] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.739 [2024-07-24 19:16:21.397761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.397770] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.739 [2024-07-24 19:16:21.397793] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.397806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.397814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.739 [2024-07-24 19:16:21.397829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.739 [2024-07-24 19:16:21.397858] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.739 [2024-07-24 19:16:21.398007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.739 [2024-07-24 19:16:21.398028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.739 [2024-07-24 19:16:21.398037] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.398046] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.739 [2024-07-24 19:16:21.398068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.398080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.398089] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.739 [2024-07-24 19:16:21.398103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.739 [2024-07-24 19:16:21.398132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.739 [2024-07-24 19:16:21.398273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.739 [2024-07-24 19:16:21.398294] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.739 [2024-07-24 19:16:21.398308] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.398318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.739 [2024-07-24 19:16:21.398340] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.398353] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.739 [2024-07-24 19:16:21.398362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.739 [2024-07-24 19:16:21.398376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.739 [2024-07-24 19:16:21.398406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.739 [2024-07-24 19:16:21.398574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.740 [2024-07-24 19:16:21.398593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.740 [2024-07-24 19:16:21.398602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.398611] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.740 [2024-07-24 19:16:21.398633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.398645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.398654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.740 [2024-07-24 19:16:21.398668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.740 [2024-07-24 19:16:21.398697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.740 [2024-07-24 19:16:21.398842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.740 [2024-07-24 19:16:21.398863] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.740 [2024-07-24 19:16:21.398872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.398881] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.740 [2024-07-24 19:16:21.398903] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.398916] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.398924] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.740 [2024-07-24 19:16:21.398939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.740 [2024-07-24 19:16:21.398967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.740 [2024-07-24 19:16:21.399104] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.740 [2024-07-24 19:16:21.399125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.740 [2024-07-24 19:16:21.399134] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.399143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.740 [2024-07-24 19:16:21.399165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.399177] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.399186] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.740 [2024-07-24 19:16:21.399200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.740 [2024-07-24 19:16:21.399229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.740 [2024-07-24 19:16:21.399376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.740 [2024-07-24 19:16:21.399396] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.740 [2024-07-24 19:16:21.399406] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.399415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.740 [2024-07-24 19:16:21.399452] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.399467] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.399476] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.740 [2024-07-24 19:16:21.399490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.740 [2024-07-24 19:16:21.399520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.740 [2024-07-24 19:16:21.399658] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.740 [2024-07-24 19:16:21.399679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.740 [2024-07-24 19:16:21.399689] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.399698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.740 [2024-07-24 19:16:21.399720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.399732] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.399741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.740 [2024-07-24 19:16:21.399755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.740 [2024-07-24 19:16:21.399784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.740 [2024-07-24 19:16:21.399916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.740 [2024-07-24 19:16:21.399937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.740 [2024-07-24 19:16:21.399946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.399955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.740 [2024-07-24 19:16:21.399977] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.399989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.399998] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.740 [2024-07-24 19:16:21.400012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.740 [2024-07-24 19:16:21.400041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.740 [2024-07-24 19:16:21.400189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.740 [2024-07-24 19:16:21.400210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.740 [2024-07-24 19:16:21.400219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.740 [2024-07-24 
19:16:21.400228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.740 [2024-07-24 19:16:21.400250] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.400263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.400271] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.740 [2024-07-24 19:16:21.400285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.740 [2024-07-24 19:16:21.400314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.740 [2024-07-24 19:16:21.404446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.740 [2024-07-24 19:16:21.404481] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.740 [2024-07-24 19:16:21.404492] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.404502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.740 [2024-07-24 19:16:21.404533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.404547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.404556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f30540) 00:24:15.740 [2024-07-24 19:16:21.404570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.740 [2024-07-24 19:16:21.404602] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f90840, cid 3, qid 0 00:24:15.740 [2024-07-24 19:16:21.404758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:15.740 [2024-07-24 19:16:21.404779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:15.740 [2024-07-24 19:16:21.404789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:15.740 [2024-07-24 19:16:21.404797] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f90840) on tqpair=0x1f30540 00:24:15.740 [2024-07-24 19:16:21.404815] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:24:15.740 0% 00:24:15.740 Data Units Read: 0 00:24:15.740 Data Units Written: 0 00:24:15.740 Host Read Commands: 0 00:24:15.740 Host Write Commands: 0 00:24:15.740 Controller Busy Time: 0 minutes 00:24:15.740 Power Cycles: 0 00:24:15.740 Power On Hours: 0 hours 00:24:15.740 Unsafe Shutdowns: 0 00:24:15.740 Unrecoverable Media Errors: 0 00:24:15.740 Lifetime Error Log Entries: 0 00:24:15.740 Warning Temperature Time: 0 minutes 00:24:15.740 Critical Temperature Time: 0 minutes 00:24:15.740 00:24:15.740 Number of Queues 00:24:15.740 ================ 00:24:15.740 Number of I/O Submission Queues: 127 00:24:15.740 Number of I/O Completion Queues: 127 00:24:15.740 00:24:15.740 Active Namespaces 00:24:15.740 ================= 00:24:15.740 Namespace ID:1 00:24:15.740 Error Recovery Timeout: Unlimited 00:24:15.740 Command Set Identifier: NVM (00h) 00:24:15.740 Deallocate: Supported 00:24:15.740 Deallocated/Unwritten Error: Not Supported 00:24:15.740 Deallocated Read Value: Unknown 00:24:15.740 Deallocate in Write Zeroes: Not Supported 
00:24:15.740 Deallocated Guard Field: 0xFFFF 00:24:15.740 Flush: Supported 00:24:15.740 Reservation: Supported 00:24:15.740 Namespace Sharing Capabilities: Multiple Controllers 00:24:15.740 Size (in LBAs): 131072 (0GiB) 00:24:15.740 Capacity (in LBAs): 131072 (0GiB) 00:24:15.740 Utilization (in LBAs): 131072 (0GiB) 00:24:15.740 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:15.740 EUI64: ABCDEF0123456789 00:24:15.740 UUID: 2def0275-5389-4387-aed5-0e5bc4ae04c8 00:24:15.740 Thin Provisioning: Not Supported 00:24:15.740 Per-NS Atomic Units: Yes 00:24:15.740 Atomic Boundary Size (Normal): 0 00:24:15.740 Atomic Boundary Size (PFail): 0 00:24:15.740 Atomic Boundary Offset: 0 00:24:15.740 Maximum Single Source Range Length: 65535 00:24:15.740 Maximum Copy Length: 65535 00:24:15.740 Maximum Source Range Count: 1 00:24:15.740 NGUID/EUI64 Never Reused: No 00:24:15.740 Namespace Write Protected: No 00:24:15.740 Number of LBA Formats: 1 00:24:15.740 Current LBA Format: LBA Format #00 00:24:15.741 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:15.741 00:24:15.998 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:15.998 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:15.998 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.998 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:15.998 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.998 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:15.999 rmmod nvme_tcp 00:24:15.999 rmmod nvme_fabrics 00:24:15.999 rmmod nvme_keyring 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1719060 ']' 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1719060 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1719060 ']' 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1719060 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1719060 00:24:15.999 19:16:21 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1719060' 00:24:15.999 killing process with pid 1719060 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1719060 00:24:15.999 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1719060 00:24:16.258 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:16.258 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:16.258 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:16.258 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:16.258 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:16.258 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.258 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.258 19:16:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.790 19:16:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:18.790 00:24:18.790 real 0m7.333s 00:24:18.790 user 0m8.135s 00:24:18.790 sys 0m2.813s 00:24:18.790 19:16:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:18.790 19:16:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:18.790 ************************************ 00:24:18.790 END TEST nvmf_identify 00:24:18.790 ************************************ 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.790 ************************************ 00:24:18.790 START TEST nvmf_perf 00:24:18.790 ************************************ 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:18.790 * Looking for test storage... 
00:24:18.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:18.790 19:16:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.326 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.327 
19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:21.327 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:21.327 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 
00:24:21.327 Found net devices under 0000:84:00.0: cvl_0_0 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:21.327 Found net devices under 0000:84:00.1: cvl_0_1 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.327 19:16:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:21.327 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:21.586 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:21.586 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:21.586 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:21.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:21.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms
00:24:21.586 
00:24:21.587 --- 10.0.0.2 ping statistics ---
00:24:21.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:21.587 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:21.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:21.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms
00:24:21.587 
00:24:21.587 --- 10.0.0.1 ping statistics ---
00:24:21.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:21.587 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1721300
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1721300
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1721300 ']'
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:21.587 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:21.587 [2024-07-24 19:16:27.165742] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization...
00:24:21.587 [2024-07-24 19:16:27.165843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:21.587 EAL: No free 2048 kB hugepages reported on node 1
00:24:21.587 [2024-07-24 19:16:27.257766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:21.846 [2024-07-24 19:16:27.399581] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:21.846 [2024-07-24 19:16:27.399647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:21.846 [2024-07-24 19:16:27.399680] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:21.846 [2024-07-24 19:16:27.399706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:21.846 [2024-07-24 19:16:27.399728] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:21.846 [2024-07-24 19:16:27.399810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:21.846 [2024-07-24 19:16:27.399853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:21.846 [2024-07-24 19:16:27.399917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:21.846 [2024-07-24 19:16:27.399928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:22.103 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:22.103 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0
00:24:22.103 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:22.104 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:22.104 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:24:22.104 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:22.104 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:24:22.104 19:16:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:24:25.388 19:16:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:24:25.388 19:16:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:24:25.646 19:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0
00:24:25.905 19:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:24:25.905 19:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:24:25.905 19:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33
-- '[' -n 0000:82:00.0 ']'
00:24:25.905 19:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:24:25.905 19:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:24:25.905 19:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:24:26.163 [2024-07-24 19:16:31.804351] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:26.163 19:16:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:26.730 19:16:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:26.730 19:16:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:26.987 19:16:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:24:26.987 19:16:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:24:27.245 19:16:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:27.817 [2024-07-24 19:16:33.428384] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:27.817 19:16:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:28.383 19:16:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']'
00:24:28.384 19:16:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0'
00:24:28.384 19:16:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:24:28.384 19:16:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0'
00:24:29.758 Initializing NVMe Controllers
00:24:29.758 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54]
00:24:29.758 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0
00:24:29.758 Initialization complete. Launching workers.
00:24:29.758 ========================================================
00:24:29.758 Latency(us)
00:24:29.758 Device Information : IOPS MiB/s Average min max
00:24:29.758 PCIE (0000:82:00.0) NSID 1 from core 0: 62162.48 242.82 514.24 52.66 6324.89
00:24:29.758 ========================================================
00:24:29.758 Total : 62162.48 242.82 514.24 52.66 6324.89
00:24:29.758 
00:24:29.758 19:16:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:29.758 EAL: No free 2048 kB hugepages reported on node 1
00:24:31.131 Initializing NVMe Controllers
00:24:31.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:31.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:31.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:31.132 Initialization complete. Launching workers.
00:24:31.132 ========================================================
00:24:31.132 Latency(us)
00:24:31.132 Device Information : IOPS MiB/s Average min max
00:24:31.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 72.00 0.28 14325.20 215.21 45168.73
00:24:31.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.00 0.16 24572.64 7963.44 47910.62
00:24:31.132 ========================================================
00:24:31.132 Total : 113.00 0.44 18043.30 215.21 47910.62
00:24:31.132 
00:24:31.132 19:16:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:31.132 EAL: No free 2048 kB hugepages reported on node 1
00:24:32.504 Initializing NVMe Controllers
00:24:32.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:32.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:32.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:32.504 Initialization complete. Launching workers.
00:24:32.504 ========================================================
00:24:32.504 Latency(us)
00:24:32.504 Device Information : IOPS MiB/s Average min max
00:24:32.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6110.53 23.87 5237.57 780.81 10488.74
00:24:32.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3794.43 14.82 8447.90 4827.17 21848.74
00:24:32.504 ========================================================
00:24:32.504 Total : 9904.96 38.69 6467.40 780.81 21848.74
00:24:32.504 
00:24:32.504 19:16:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:24:32.504 19:16:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:24:32.504 19:16:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:32.504 EAL: No free 2048 kB hugepages reported on node 1
00:24:35.032 Initializing NVMe Controllers
00:24:35.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:35.032 Controller IO queue size 128, less than required.
00:24:35.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:35.032 Controller IO queue size 128, less than required.
00:24:35.032 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:35.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:35.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:35.032 Initialization complete. Launching workers.
00:24:35.032 ========================================================
00:24:35.032 Latency(us)
00:24:35.032 Device Information : IOPS MiB/s Average min max
00:24:35.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1109.99 277.50 118698.06 92725.04 176420.13
00:24:35.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 574.00 143.50 232694.30 126734.88 359313.03
00:24:35.032 ========================================================
00:24:35.032 Total : 1683.99 421.00 157554.26 92725.04 359313.03
00:24:35.033 
00:24:35.033 19:16:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:24:35.033 EAL: No free 2048 kB hugepages reported on node 1
00:24:35.033 No valid NVMe controllers or AIO or URING devices found
00:24:35.033 Initializing NVMe Controllers
00:24:35.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:35.033 Controller IO queue size 128, less than required.
00:24:35.033 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:35.033 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:24:35.033 Controller IO queue size 128, less than required.
00:24:35.033 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:35.033 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:24:35.033 WARNING: Some requested NVMe devices were skipped
00:24:35.033 19:16:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:24:35.033 EAL: No free 2048 kB hugepages reported on node 1
00:24:37.561 Initializing NVMe Controllers
00:24:37.561 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:37.561 Controller IO queue size 128, less than required.
00:24:37.561 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:37.561 Controller IO queue size 128, less than required.
00:24:37.561 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:37.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:37.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:24:37.561 Initialization complete. Launching workers.
00:24:37.561 
00:24:37.561 ====================
00:24:37.561 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:24:37.561 TCP transport:
00:24:37.561 polls: 6632
00:24:37.561 idle_polls: 4204
00:24:37.561 sock_completions: 2428
00:24:37.561 nvme_completions: 4423
00:24:37.561 submitted_requests: 6602
00:24:37.561 queued_requests: 1
00:24:37.561 
00:24:37.561 ====================
00:24:37.561 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:24:37.561 TCP transport:
00:24:37.561 polls: 8997
00:24:37.561 idle_polls: 5910
00:24:37.561 sock_completions: 3087
00:24:37.561 nvme_completions: 4635
00:24:37.561 submitted_requests: 6886
00:24:37.561 queued_requests: 1
00:24:37.561 ========================================================
00:24:37.561 Latency(us)
00:24:37.561 Device Information : IOPS MiB/s Average min max
00:24:37.561 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1103.76 275.94 118267.47 67153.11 202637.13
00:24:37.561 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1156.68 289.17 113010.14 54489.16 161341.15
00:24:37.561 ========================================================
00:24:37.561 Total : 2260.44 565.11 115577.27 54489.16 202637.13
00:24:37.561 
00:24:37.561 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:24:37.561 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf --
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:38.128 rmmod nvme_tcp 00:24:38.128 rmmod nvme_fabrics 00:24:38.128 rmmod nvme_keyring 00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1721300 ']' 00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1721300 00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1721300 ']' 00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1721300 00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1721300 00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1721300' 00:24:38.128 killing process with pid 1721300 00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1721300 00:24:38.128 19:16:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1721300 00:24:40.029 19:16:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:40.029 19:16:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:40.029 19:16:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:40.029 19:16:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:40.029 19:16:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:40.029 19:16:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.029 19:16:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.029 19:16:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.936 19:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:41.936 00:24:41.936 real 0m23.480s 00:24:41.936 user 1m12.159s 00:24:41.936 sys 0m6.273s 00:24:41.936 19:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:41.936 19:16:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:41.936 ************************************ 00:24:41.936 END TEST nvmf_perf 00:24:41.936 ************************************ 00:24:41.936 19:16:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:41.936 19:16:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:41.936 19:16:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:41.936 19:16:47 nvmf_tcp.nvmf_host 
-- common/autotest_common.sh@10 -- # set +x 00:24:41.936 ************************************ 00:24:41.936 START TEST nvmf_fio_host 00:24:41.936 ************************************ 00:24:41.936 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:42.196 * Looking for test storage... 00:24:42.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:42.196 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:42.197 19:16:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:44.764 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:44.764 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:44.764 19:16:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:44.764 Found net devices under 0000:84:00.0: cvl_0_0 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:44.764 Found net devices under 0000:84:00.1: cvl_0_1 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:44.764 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:45.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:24:45.024 00:24:45.024 --- 10.0.0.2 ping statistics --- 00:24:45.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.024 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:45.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:24:45.024 00:24:45.024 --- 10.0.0.1 ping statistics --- 00:24:45.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.024 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1725410 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1725410 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1725410 ']' 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:45.024 19:16:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.024 [2024-07-24 19:16:50.643725] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
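The nvmftestinit phase traced above reduces to a short, reproducible recipe: one port of the dual-port E810 is moved into a private network namespace to act as the target side, the other port stays in the root namespace as the initiator, and nvmf_tgt is then launched inside the namespace. A minimal sketch of that setup, assuming the same cvl_0_0/cvl_0_1 port names and 10.0.0.x addresses used in this run (root required, paths relative to the spdk checkout):

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # first port becomes the target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1                # second port stays with the initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                 # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and the reverse direction
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Running the target under ip netns exec is what lets a single host with one dual-port NIC push real wire traffic between initiator and target instead of looping through localhost.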
00:24:45.024 [2024-07-24 19:16:50.643904] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.284 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.284 [2024-07-24 19:16:50.791885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:45.542 [2024-07-24 19:16:50.995776] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.542 [2024-07-24 19:16:50.995883] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.542 [2024-07-24 19:16:50.995938] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.542 [2024-07-24 19:16:50.995988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.542 [2024-07-24 19:16:50.996028] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.542 [2024-07-24 19:16:50.996213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.542 [2024-07-24 19:16:50.996279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.542 [2024-07-24 19:16:50.996367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.542 [2024-07-24 19:16:50.996389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.108 19:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:46.108 19:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:46.108 19:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:46.366 [2024-07-24 19:16:51.958498] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.366 19:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:46.366 19:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:46.366 19:16:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.366 19:16:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:46.933 Malloc1 00:24:46.933 19:16:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:47.498 19:16:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:47.756 19:16:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:48.014 [2024-07-24 19:16:53.594457] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.014 19:16:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:48.579 
19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:48.580 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:48.838 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:48.838 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:48.838 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:48.838 19:16:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:48.838 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:48.838 fio-3.35 00:24:48.838 Starting 
1 thread 00:24:49.096 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.627 00:24:51.627 test: (groupid=0, jobs=1): err= 0: pid=1726025: Wed Jul 24 19:16:56 2024 00:24:51.627 read: IOPS=6611, BW=25.8MiB/s (27.1MB/s)(51.9MiB/2008msec) 00:24:51.627 slat (usec): min=2, max=143, avg= 3.57, stdev= 2.04 00:24:51.627 clat (usec): min=3074, max=18350, avg=10567.80, stdev=891.72 00:24:51.627 lat (usec): min=3105, max=18353, avg=10571.37, stdev=891.58 00:24:51.627 clat percentiles (usec): 00:24:51.627 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:24:51.627 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:24:51.627 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:24:51.627 | 99.00th=[12518], 99.50th=[12911], 99.90th=[16712], 99.95th=[17957], 00:24:51.627 | 99.99th=[18220] 00:24:51.627 bw ( KiB/s): min=25216, max=27008, per=99.86%, avg=26406.00, stdev=837.40, samples=4 00:24:51.627 iops : min= 6304, max= 6752, avg=6601.50, stdev=209.35, samples=4 00:24:51.627 write: IOPS=6618, BW=25.9MiB/s (27.1MB/s)(51.9MiB/2008msec); 0 zone resets 00:24:51.627 slat (usec): min=2, max=127, avg= 3.66, stdev= 1.54 00:24:51.627 clat (usec): min=1372, max=16896, avg=8649.11, stdev=738.63 00:24:51.627 lat (usec): min=1381, max=16899, avg=8652.77, stdev=738.57 00:24:51.627 clat percentiles (usec): 00:24:51.627 | 1.00th=[ 6980], 5.00th=[ 7570], 10.00th=[ 7832], 20.00th=[ 8094], 00:24:51.627 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8848], 00:24:51.627 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9503], 95.00th=[ 9634], 00:24:51.627 | 99.00th=[10159], 99.50th=[10421], 99.90th=[14353], 99.95th=[14877], 00:24:51.628 | 99.99th=[16909] 00:24:51.628 bw ( KiB/s): min=26256, max=26688, per=99.98%, avg=26468.00, stdev=219.53, samples=4 00:24:51.628 iops : min= 6564, max= 6672, avg=6617.00, stdev=54.88, samples=4 00:24:51.628 lat (msec) : 2=0.01%, 4=0.08%, 10=61.00%, 20=38.91% 00:24:51.628 cpu : usr=67.02%, sys=30.14%, ctx=65, majf=0, minf=39 00:24:51.628 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:51.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:51.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:51.628 issued rwts: total=13275,13289,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:51.628 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:51.628 00:24:51.628 Run status group 0 (all jobs): 00:24:51.628 READ: bw=25.8MiB/s (27.1MB/s), 25.8MiB/s-25.8MiB/s (27.1MB/s-27.1MB/s), io=51.9MiB (54.4MB), run=2008-2008msec 00:24:51.628 WRITE: bw=25.9MiB/s (27.1MB/s), 25.9MiB/s-25.9MiB/s (27.1MB/s-27.1MB/s), io=51.9MiB (54.4MB), run=2008-2008msec 00:24:51.628 19:16:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:51.628 19:16:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:51.628 19:16:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:51.628 19:16:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:51.628 19:16:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:51.628 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:51.628 fio-3.35 00:24:51.628 Starting 1 thread 00:24:51.628 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.158 00:24:54.158 test: (groupid=0, jobs=1): err= 0: pid=1726353: Wed Jul 24 19:16:59 2024 00:24:54.158 read: IOPS=6348, BW=99.2MiB/s (104MB/s)(199MiB/2009msec) 00:24:54.158 slat (usec): min=3, max=136, avg= 5.20, stdev= 2.08 00:24:54.158 clat (usec): min=2634, max=23783, avg=11785.14, stdev=2820.26 00:24:54.158 lat (usec): min=2639, max=23788, avg=11790.35, stdev=2820.26 00:24:54.158 clat percentiles (usec): 00:24:54.158 | 1.00th=[ 6063], 5.00th=[ 7504], 10.00th=[ 8291], 20.00th=[ 9503], 00:24:54.158 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11600], 60.00th=[12256], 00:24:54.158 | 70.00th=[12911], 80.00th=[14091], 90.00th=[15664], 95.00th=[16581], 00:24:54.158 | 99.00th=[19006], 99.50th=[20055], 99.90th=[22414], 99.95th=[22676], 00:24:54.158 | 99.99th=[22938] 00:24:54.158 bw ( KiB/s): min=41920, max=58176, per=50.04%, avg=50832.00, 
stdev=7189.52, samples=4 00:24:54.159 iops : min= 2620, max= 3636, avg=3177.00, stdev=449.34, samples=4 00:24:54.159 write: IOPS=3678, BW=57.5MiB/s (60.3MB/s)(104MiB/1811msec); 0 zone resets 00:24:54.159 slat (usec): min=40, max=190, avg=47.98, stdev= 5.27 00:24:54.159 clat (usec): min=7493, max=28099, avg=15176.36, stdev=2652.23 00:24:54.159 lat (usec): min=7541, max=28146, avg=15224.34, stdev=2652.07 00:24:54.159 clat percentiles (usec): 00:24:54.159 | 1.00th=[10552], 5.00th=[11469], 10.00th=[12125], 20.00th=[12911], 00:24:54.159 | 30.00th=[13566], 40.00th=[14222], 50.00th=[14877], 60.00th=[15533], 00:24:54.159 | 70.00th=[16319], 80.00th=[17433], 90.00th=[18744], 95.00th=[20055], 00:24:54.159 | 99.00th=[21890], 99.50th=[22676], 99.90th=[27395], 99.95th=[27919], 00:24:54.159 | 99.99th=[28181] 00:24:54.159 bw ( KiB/s): min=44224, max=59936, per=89.68%, avg=52776.00, stdev=7032.84, samples=4 00:24:54.159 iops : min= 2764, max= 3746, avg=3298.50, stdev=439.55, samples=4 00:24:54.159 lat (msec) : 4=0.17%, 10=18.17%, 20=79.53%, 50=2.13% 00:24:54.159 cpu : usr=80.58%, sys=16.98%, ctx=92, majf=0, minf=69 00:24:54.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:24:54.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:54.159 issued rwts: total=12754,6661,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:54.159 00:24:54.159 Run status group 0 (all jobs): 00:24:54.159 READ: bw=99.2MiB/s (104MB/s), 99.2MiB/s-99.2MiB/s (104MB/s-104MB/s), io=199MiB (209MB), run=2009-2009msec 00:24:54.159 WRITE: bw=57.5MiB/s (60.3MB/s), 57.5MiB/s-57.5MiB/s (60.3MB/s-60.3MB/s), io=104MiB (109MB), run=1811-1811msec 00:24:54.159 19:16:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.417 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:54.417 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:54.417 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:54.417 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:54.417 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:54.417 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:54.417 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:54.417 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:54.417 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:54.417 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:54.417 rmmod nvme_tcp 00:24:54.417 rmmod nvme_fabrics 00:24:54.417 rmmod nvme_keyring 00:24:54.417 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:54.677 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:54.677 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:54.677 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1725410 ']' 00:24:54.677 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 
-- # killprocess 1725410 00:24:54.677 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1725410 ']' 00:24:54.677 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1725410 00:24:54.677 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:54.677 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:54.677 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1725410 00:24:54.677 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:54.677 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:54.677 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1725410' 00:24:54.677 killing process with pid 1725410 00:24:54.677 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1725410 00:24:54.677 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1725410 00:24:54.936 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:54.936 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:54.936 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:54.936 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:54.936 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:54.936 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.936 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.936 19:17:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:57.468 00:24:57.468 real 0m15.077s 00:24:57.468 user 0m45.669s 00:24:57.468 sys 0m4.721s 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.468 ************************************ 00:24:57.468 END TEST nvmf_fio_host 00:24:57.468 ************************************ 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.468 ************************************ 00:24:57.468 START TEST nvmf_failover 00:24:57.468 ************************************ 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:57.468 * Looking for test storage... 
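For reference before the failover half begins: the nvmf_fio_host stage that just finished never touches the kernel NVMe initiator. It LD_PRELOADs SPDK's fio plugin into a stock fio binary and addresses the TCP target through fio's --filename key/value syntax. Stripped of the xtrace and sanitizer-probing noise (the ldd | grep libasan loop above only extends LD_PRELOAD when a sanitized build is detected, and found nothing here), the two invocations boil down to this shape, with paths relative to the spdk checkout:

    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
        ./app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
        ./app/fio/nvme/mock_sgl_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'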
00:24:57.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.468 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
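Before the traced nvmftestinit repeats the namespace setup for this test, it helps to condense what failover.sh is about to exercise: the same malloc-backed subsystem is exposed on three TCP listeners, a bdevperf initiator attaches through the first one, and the test then removes that listener to force a path failover. The sequence, as traced further below (ports, names and sizes from this run; the three listeners are added by separate calls in the script, folded into a loop here; rpc.py paths relative to the spdk checkout):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                             # three candidate paths
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s $port
    done
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # primary path
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # standby path
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                             # drop the primary mid-I/O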
00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:57.469 19:17:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.003 19:17:05 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:00.003 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:00.003 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:00.003 Found net devices under 0000:84:00.0: cvl_0_0 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:00.003 Found net devices under 0000:84:00.1: cvl_0_1 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.003 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:00.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:25:00.004 00:25:00.004 --- 10.0.0.2 ping statistics --- 00:25:00.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.004 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:25:00.004 00:25:00.004 --- 10.0.0.1 ping statistics --- 00:25:00.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.004 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:00.004 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:00.262 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:00.262 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.262 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:00.262 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.262 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1728692 00:25:00.262 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1728692 00:25:00.262 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1728692 ']' 00:25:00.262 19:17:05 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:00.262 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.262 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:00.262 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.262 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:00.262 19:17:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:00.263 [2024-07-24 19:17:05.793270] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:25:00.263 [2024-07-24 19:17:05.793416] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.263 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.263 [2024-07-24 19:17:05.878842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:00.540 [2024-07-24 19:17:06.022729] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.540 [2024-07-24 19:17:06.022798] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.540 [2024-07-24 19:17:06.022818] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.540 [2024-07-24 19:17:06.022835] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.540 [2024-07-24 19:17:06.022849] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
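One startup detail worth decoding: this target is launched with -m 0xE (binary 1110), so its reactors are pinned to cores 1-3 and core 0 is left free for the initiator-side work, whereas the fio-host target earlier in this run used -m 0xF and claimed all four cores. The reactor lines that follow confirm the mask. A throwaway decoder, purely illustrative and not part of the harness:

    mask=0xE
    for cpu in {0..3}; do
        (( (mask >> cpu) & 1 )) && echo "reactor expected on core $cpu"
    done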
00:25:00.540 [2024-07-24 19:17:06.022942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.540 [2024-07-24 19:17:06.023038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:00.540 [2024-07-24 19:17:06.023047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.484 19:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:01.484 19:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:01.484 19:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:01.484 19:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:01.484 19:17:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:01.484 19:17:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.484 19:17:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:01.742 [2024-07-24 19:17:07.341368] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.742 19:17:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:02.001 Malloc0 00:25:02.259 19:17:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:02.517 19:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:02.775 19:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:03.342 [2024-07-24 19:17:08.896103] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.342 19:17:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:03.908 [2024-07-24 19:17:09.522169] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:03.908 19:17:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:04.474 [2024-07-24 19:17:10.028102] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:04.474 19:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1729245 00:25:04.474 19:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:04.474 19:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:04.474 19:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1729245 /var/tmp/bdevperf.sock 00:25:04.474 19:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1729245 ']' 00:25:04.474 19:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:04.474 19:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:04.474 19:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:04.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:04.474 19:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:04.474 19:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:05.040 19:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:05.040 19:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:05.040 19:17:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:05.604 NVMe0n1 00:25:05.604 19:17:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:06.170 00:25:06.170 19:17:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1729378 00:25:06.170 19:17:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:06.170 19:17:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:07.105 19:17:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.364 [2024-07-24 19:17:13.051679] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.051775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.051797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.051814] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.051831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.051848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.051865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.051895 .. 19:17:13.052596] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: last message repeated verbatim for every timestamp in this range (identical tqpair=0x975420, state(5))
00:25:07.364 [2024-07-24 19:17:13.052612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.052628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.052644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.052661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.052682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.052698] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.052714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.052730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.052746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.364 [2024-07-24 19:17:13.052762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x975420 is same with the state(5) to be set 00:25:07.623 19:17:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:10.905 19:17:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:10.905 00:25:10.905 19:17:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:11.163 19:17:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:14.448 19:17:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.707 [2024-07-24 19:17:20.184497] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.707 19:17:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:15.642 19:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:16.209 19:17:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1729378 00:25:21.517 0 00:25:21.517 19:17:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1729245 00:25:21.517 19:17:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1729245 ']' 00:25:21.517 19:17:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1729245 00:25:21.517 19:17:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # 
uname 00:25:21.517 19:17:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:21.517 19:17:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1729245 00:25:21.517 19:17:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:21.517 19:17:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:21.517 19:17:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1729245' 00:25:21.517 killing process with pid 1729245 00:25:21.517 19:17:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1729245 00:25:21.517 19:17:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1729245 00:25:21.784 19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:21.784 [2024-07-24 19:17:10.099655] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:25:21.784 [2024-07-24 19:17:10.099755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729245 ] 00:25:21.784 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.784 [2024-07-24 19:17:10.176285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.784 [2024-07-24 19:17:10.319775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.784 Running I/O for 15 seconds... 00:25:21.784 [2024-07-24 19:17:13.055142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:62848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 
19:17:13.055438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.055972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.055994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.056014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.056036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.784 [2024-07-24 19:17:13.056058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.784 [2024-07-24 19:17:13.056080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.785 [2024-07-24 19:17:13.056100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.785 [2024-07-24 19:17:13.056161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.785 [2024-07-24 19:17:13.056220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.785 [2024-07-24 19:17:13.056265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.785 [2024-07-24 19:17:13.056312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.785 [2024-07-24 19:17:13.056353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:72 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.785 [2024-07-24 19:17:13.056394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.785 [2024-07-24 19:17:13.056444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.056511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.056554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.056597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.056639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.056681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.056722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.056764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.056806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63168 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.056851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.056907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.056951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.056973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.056994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 
19:17:13.057363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.785 [2024-07-24 19:17:13.057952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.785 [2024-07-24 19:17:13.057971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.057992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.058979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.058999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.059056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.059096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.059135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.059175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059210] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.059237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.059298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.059355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.059398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.059450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.059521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.059560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.786 [2024-07-24 19:17:13.059600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.059641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.059708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059730] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.059750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.786 [2024-07-24 19:17:13.059792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.786 [2024-07-24 19:17:13.059813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.787 [2024-07-24 19:17:13.059833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.787 [2024-07-24 19:17:13.059860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.787 [2024-07-24 19:17:13.059880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.787 [2024-07-24 19:17:13.059918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.787 [2024-07-24 19:17:13.059937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.787 [2024-07-24 19:17:13.059958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.787 [2024-07-24 19:17:13.059976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.787 [2024-07-24 19:17:13.059997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.787 [2024-07-24 19:17:13.060028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.787 [2024-07-24 19:17:13.060050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.787 [2024-07-24 19:17:13.060069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.787 [2024-07-24 19:17:13.060090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.787 [2024-07-24 19:17:13.060109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.787 [2024-07-24 19:17:13.060130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.787 [2024-07-24 19:17:13.060149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.787 [2024-07-24 19:17:13.060170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63728 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:21.787 [2024-07-24 19:17:13.060189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 14 more WRITE commands (sqid:1, nsid:1, lba 63736-63840 step 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[... 6 READ commands (sqid:1, nsid:1, lba 63048-63088 step 8, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed ABORTED - SQ DELETION (00/08) ...]
00:25:21.787 [2024-07-24 19:17:13.061134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:21.787 [2024-07-24 19:17:13.061156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:21.787 [2024-07-24 19:17:13.061173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63096 len:8 PRP1 0x0 PRP2 0x0
00:25:21.787 [2024-07-24 19:17:13.061191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.787 [2024-07-24 19:17:13.061266] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a07ba0 was disconnected and freed. reset controller.
00:25:21.787 [2024-07-24 19:17:13.061292] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[... 4 ASYNC EVENT REQUEST admin commands (qid:0, cid:0-3, nsid:0, cdw10:00000000 cdw11:00000000) each completed ABORTED - SQ DELETION (00/08) ...]
00:25:21.787 [2024-07-24 19:17:13.061521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:21.787 [2024-07-24 19:17:13.061602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e1790 (9): Bad file descriptor
00:25:21.787 [2024-07-24 19:17:13.066106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:21.787 [2024-07-24 19:17:13.109544] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
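Every completion in the run above carries the same status pair, ABORTED - SQ DELETION (00/08): status code type 0x00 (generic) and status code 0x08 (command aborted due to SQ deletion), which the target returns for in-flight commands when their submission queue is torn down during failover. Below is a minimal sketch of how a consumer of the SPDK NVMe driver could recognize that status in its completion callback; io_complete_cb and its retry policy are illustrative assumptions, while the spdk/nvme.h types and the two constants are the real definitions behind the printed (00/08) pair.

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical completion callback: recognizes the ABORTED - SQ DELETION
 * (sct 0x00 / sc 0x08) status that fills the log above, so the I/O can be
 * treated as retryable on the new path instead of as a hard failure. */
static void
io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* normal completion */
	}
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Submission queue was deleted under us (path failover);
		 * retryable, not a media or command error. */
		printf("I/O aborted by SQ deletion (00/08), retry on new path\n");
		return;
	}
	fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
		cpl->status.sct, cpl->status.sc);
}

In this test the I/O load keeps running across the failover, so the aborted commands are evidently requeued and retried rather than surfaced as errors, which is consistent with the driver logging them at NOTICE rather than ERROR level.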
[... 66 READ commands (sqid:1, nsid:1, lba 129672-130192 step 8, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), with two interleaved WRITE commands (lba 130200 and 130208, SGL DATA BLOCK), each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[... 48 WRITE commands (sqid:1, nsid:1, lba 130216-130592 step 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed ABORTED - SQ DELETION (00/08) ...]
[... 12 queued WRITE commands (sqid:1, cid:0, lba 130600-130688 step 8, len:8, PRP1 0x0 PRP2 0x0) completed manually ("nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o" / "nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:"), each with status ABORTED - SQ DELETION (00/08) ...]
00:25:21.791 [2024-07-24 19:17:16.797111] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a07d80 was disconnected and freed. reset controller.
00:25:21.791 [2024-07-24 19:17:16.797137] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... 4 ASYNC EVENT REQUEST admin commands (qid:0, cid:3-0, nsid:0, cdw10:00000000 cdw11:00000000) each completed ABORTED - SQ DELETION (00/08) ...]
00:25:21.791 [2024-07-24 19:17:16.797351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
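The sequence after each burst of aborts is the same every time: the disconnected qpair is freed (bdev_nvme_disconnected_qpair_cb), the transport ID is advanced to the next listener (4420 to 4421 in the first cycle, 4421 to 4422 here), the controller is marked failed, and a reset reconnects it on the new path. A rough equivalent at the raw driver level, assuming an application that manages its own paths, might look like the sketch below; recover_ctrlr and next_trid are hypothetical names, while spdk_nvme_ctrlr_is_failed(), spdk_nvme_ctrlr_set_trid(), and spdk_nvme_ctrlr_reset() are existing public SPDK APIs. This is a sketch of the pattern, not the bdev_nvme implementation itself.

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical recovery helper mirroring the log sequence above: once the
 * controller is in the failed state, point it at the next target address
 * and reset it. */
static int
recover_ctrlr(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_transport_id *next_trid)
{
	if (next_trid != NULL && spdk_nvme_ctrlr_is_failed(ctrlr)) {
		/* Analogous to "Start failover from 10.0.0.2:4421 to
		 * 10.0.0.2:4422"; switching the trid is only meaningful
		 * while the controller is in the failed state. */
		if (spdk_nvme_ctrlr_set_trid(ctrlr, next_trid) != 0) {
			fprintf(stderr, "failed to switch transport ID\n");
			return -1;
		}
	}
	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		fprintf(stderr, "resetting controller failed\n");
		return -1;
	}
	printf("Resetting controller successful.\n");
	return 0;
}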
00:25:21.791 [2024-07-24 19:17:16.801813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:21.791 [2024-07-24 19:17:16.801868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e1790 (9): Bad file descriptor
00:25:21.791 [2024-07-24 19:17:16.925657] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[... 14 WRITE commands (sqid:1, nsid:1, lba 7472-7576 step 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[... 15 READ commands (sqid:1, nsid:1, lba 6584-6696 step 8, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), one WRITE command (lba 7584), and 9 further READ commands (lba 6704-6768 step 8) each completed ABORTED - SQ DELETION (00/08) ...]
00:25:21.792 [2024-07-24 19:17:21.649795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:48 nsid:1 lba:6776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.792 [2024-07-24 19:17:21.649815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.792 [2024-07-24 19:17:21.649837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.792 [2024-07-24 19:17:21.649857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.792 [2024-07-24 19:17:21.649880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.792 [2024-07-24 19:17:21.649900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.792 [2024-07-24 19:17:21.649922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.792 [2024-07-24 19:17:21.649942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.792 [2024-07-24 19:17:21.649964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.792 [2024-07-24 19:17:21.649984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.792 [2024-07-24 19:17:21.650006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.792 [2024-07-24 19:17:21.650027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.792 [2024-07-24 19:17:21.650049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.792 [2024-07-24 19:17:21.650069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.792 [2024-07-24 19:17:21.650091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6856 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 
19:17:21.650675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.793 [2024-07-24 19:17:21.650913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.793 [2024-07-24 19:17:21.650934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.650955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.650976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.650997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651101] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.651967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.651988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.652008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.652029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.652049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.652071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.652090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.652112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.652132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.652153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.652173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.652195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.652215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.652236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.652256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.652278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.652298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.652319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.652339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.652361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.652381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 
19:17:21.652408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.652437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.652462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.652482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.794 [2024-07-24 19:17:21.652504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.794 [2024-07-24 19:17:21.652525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.652546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.652566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.652587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.652606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.652628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.652648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.652669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.652689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.652710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.652730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.652752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.652772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.652793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.652813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.652835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.795 [2024-07-24 19:17:21.652855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.652876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.795 [2024-07-24 19:17:21.652896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.652916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.652942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.652965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.652985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.653007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.653028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.653049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.653069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.653090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.653110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.653132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.653151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.653173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.653193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.653215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.653235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.653256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:122 nsid:1 lba:7408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.653275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.653297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.653318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.653339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.653359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.653380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.653401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.653423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.653456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.653488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.653510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.653531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.795 [2024-07-24 19:17:21.653552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.653609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.795 [2024-07-24 19:17:21.653632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.795 [2024-07-24 19:17:21.653649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7464 len:8 PRP1 0x0 PRP2 0x0 00:25:21.795 [2024-07-24 19:17:21.653667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.795 [2024-07-24 19:17:21.653756] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a07d80 was disconnected and freed. reset controller. 
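Note: the abort burst above is the expected teardown signature of a bdev_nvme reset over TCP. Deleting the I/O submission queue completes every still-queued command with ABORTED - SQ DELETION, and the counts line up with bdevperf's 128-deep queue: 1 WRITE shown, 126 elided, and 1 READ completed manually before the qpair is freed. A small offline sanity check against the transcript this test saves (a sketch; the try.txt path is the one used later in this log):

    # Sketch: confirm all printed completions in the saved transcript are
    # SQ-deletion aborts (status 00/08), not data errors.
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    grep -c 'ABORTED - SQ DELETION' "$log"
    grep 'spdk_nvme_print_completion' "$log" | grep -vc 'SQ DELETION (00/08)'   # expect 0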
00:25:21.795 [2024-07-24 19:17:21.653782] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:21.795 [2024-07-24 19:17:21.653828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:21.795 [2024-07-24 19:17:21.653855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.795 [2024-07-24 19:17:21.653875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:21.795 [2024-07-24 19:17:21.653893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.795 [2024-07-24 19:17:21.653912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:21.795 [2024-07-24 19:17:21.653930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.795 [2024-07-24 19:17:21.653949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:21.795 [2024-07-24 19:17:21.653967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.795 [2024-07-24 19:17:21.653984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:21.795 [2024-07-24 19:17:21.654052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e1790 (9): Bad file descriptor
00:25:21.795 [2024-07-24 19:17:21.658502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:21.795 [2024-07-24 19:17:21.828239] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
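Note: each reset walks the alternate paths registered for the controller, which is why failover bounces between 10.0.0.2:4420, :4421 and :4422. A minimal sketch of how those paths are set up, using the same rpc.py flags that appear later in this log (the first attach creates bdev NVMe0n1; repeat calls with the same -b/-n but a new port are recorded as failover paths rather than new bdevs):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for port in 4420 4421 4422; do
      # same -b/-n every time: adds a path to one controller instead of a new bdev
      "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done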
00:25:21.795
00:25:21.795 Latency(us)
00:25:21.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:21.795 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:21.795 Verification LBA range: start 0x0 length 0x4000
00:25:21.795 NVMe0n1 : 15.02 6295.85 24.59 635.30 0.00 18430.82 1116.54 20194.80
00:25:21.795 ===================================================================================================================
00:25:21.795 Total : 6295.85 24.59 635.30 0.00 18430.82 1116.54 20194.80
00:25:21.795 Received shutdown signal, test time was about 15.000000 seconds
00:25:21.795
00:25:21.795 Latency(us)
00:25:21.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:21.795 ===================================================================================================================
00:25:21.795 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1731222
19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1731222 /var/tmp/bdevperf.sock
19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1731222 ']'
19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
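Note: bdevperf is launched with -z (start idle and wait for an RPC) and -r pointing at its private RPC socket, so the harness polls until that socket appears before talking to it. A hedged reconstruction of the waitforlisten pattern traced above; the real helper in common/autotest_common.sh may differ in details such as the poll interval:

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/bdevperf.sock} max_retries=100
      while ((max_retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died before listening
        [[ -S $rpc_addr ]] && return 0           # RPC UNIX socket is up
        sleep 0.1                                # assumed poll interval
      done
      return 1
    }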
00:25:21.796 19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:22.363 19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
19:17:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:22.621 [2024-07-24 19:17:28.267309] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
19:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:23.187 [2024-07-24 19:17:28.789094] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
19:17:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:23.753 NVMe0n1
19:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:24.319 00
19:17:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:25.254 00
19:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:25:25.512 19:17:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:25.771 19:17:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:25:29.056 19:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:25:29.314 19:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1732021
19:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
19:17:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1732021
00:25:30.690 0
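Note: this is the heart of the failover exercise: drop the active path while bdevperf holds 128 I/Os in flight, give bdev_nvme a moment to fail over, then drive verification through bdevperf's RPC helper. The same three commands, lifted from the trace above (paths as used by this run); the lone 0 printed after wait appears to be the perform_tests result:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller \
      NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3   # allow failover to the next registered path before measuring
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests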
19:17:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:30.690 [2024-07-24 19:17:27.327843] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization...
00:25:30.690 [2024-07-24 19:17:27.327958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731222 ]
00:25:30.690 EAL: No free 2048 kB hugepages reported on node 1
00:25:30.690 [2024-07-24 19:17:27.409595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:30.690 [2024-07-24 19:17:27.551241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:30.690 [2024-07-24 19:17:31.308863] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:30.690 [2024-07-24 19:17:31.308946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:30.690 [2024-07-24 19:17:31.308976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:30.690 [2024-07-24 19:17:31.308998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:30.690 [2024-07-24 19:17:31.309017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:30.690 [2024-07-24 19:17:31.309037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:30.690 [2024-07-24 19:17:31.309056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:30.690 [2024-07-24 19:17:31.309075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:30.690 [2024-07-24 19:17:31.309092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:30.690 [2024-07-24 19:17:31.309112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:30.690 [2024-07-24 19:17:31.309179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:30.690 [2024-07-24 19:17:31.309221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x659790 (9): Bad file descriptor
00:25:30.690 [2024-07-24 19:17:31.359259] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:30.690 Running I/O for 1 seconds...
00:25:30.690
00:25:30.690 Latency(us)
00:25:30.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:30.690 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:30.690 Verification LBA range: start 0x0 length 0x4000
00:25:30.690 NVMe0n1 : 1.00 6605.63 25.80 0.00 0.00 19281.89 1201.49 17864.63
00:25:30.690 ===================================================================================================================
00:25:30.690 Total : 6605.63 25.80 0.00 0.00 19281.89 1201.49 17864.63
19:17:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:17:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
19:17:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:31.256 19:17:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:17:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:25:31.514 19:17:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:32.080 19:17:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:25:35.365 19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1731222
19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1731222 ']'
19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1731222
19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1731222
19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1731222'
killing process with pid 1731222
19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1731222
19:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1731222
00:25:35.627 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
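Note: killprocess is the autotest helper whose steps are traced line by line above. A hedged reconstruction of its logic; the real function in common/autotest_common.sh may differ slightly:

    killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 0                  # already gone, nothing to do
      if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1    # refuse to kill sudo itself
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                         # reap it if it is our child
    }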
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:36.230 19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1728692 ']'
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1728692
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1728692 ']'
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1728692
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1728692
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1728692'
killing process with pid 1728692
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1728692
19:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1728692
00:25:36.489 19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
19:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:39.019
00:25:39.019 real 0m41.378s
00:25:39.019 user 2m27.853s
00:25:39.019 sys 0m7.253s
19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable
19:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:39.019 ************************************
00:25:39.019 END TEST nvmf_failover
00:25:39.019 ************************************
19:17:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
19:17:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
19:17:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
19:17:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:39.019 ************************************
00:25:39.019 START TEST nvmf_host_discovery
00:25:39.019 ************************************
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:25:39.019 * Looking for test storage...
00:25:39.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
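Note: the host identity above comes from nvmf/common.sh: nvme-cli's gen-hostnqn emits an NQN that ends in a UUID, and the host ID is that same UUID with the NQN prefix stripped. A sketch of the derivation (the exact parameter expansion used by common.sh may differ):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # bare <uuid>; safe because UUIDs contain no ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")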
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:39.019 19:17:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:39.019 19:17:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:41.552 19:17:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:41.552 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:41.552 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:41.552 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:41.553 Found net devices under 0000:84:00.0: cvl_0_0 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:41.553 Found net devices under 0000:84:00.1: cvl_0_1 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.553 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:41.812 19:17:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:41.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:25:41.812 00:25:41.812 --- 10.0.0.2 ping statistics --- 00:25:41.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.812 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:25:41.812 00:25:41.812 --- 10.0.0.1 ping statistics --- 00:25:41.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.812 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1734894 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1734894 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1734894 ']' 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
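The namespace plumbing traced above is what lets one machine act as both NVMe/TCP endpoints: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the default namespace as 10.0.0.1 (the initiator side), and the two pings prove reachability in both directions before any NVMe traffic is attempted. Condensed from the nvmf_tcp_init steps in this trace (run as root; the cvl_0_* names come from the E810 port detection earlier in the log):

    # Give the target NIC its own network stack.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator side: 10.0.0.1 in the default namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up

    # Target side: 10.0.0.2 inside the namespace, loopback up as well.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Admit NVMe/TCP traffic on the initiator interface, then verify.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1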
00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:41.812 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.812 [2024-07-24 19:17:47.464887] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:25:41.812 [2024-07-24 19:17:47.464995] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.070 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.070 [2024-07-24 19:17:47.556901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.070 [2024-07-24 19:17:47.698200] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.070 [2024-07-24 19:17:47.698275] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.070 [2024-07-24 19:17:47.698294] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.070 [2024-07-24 19:17:47.698310] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.070 [2024-07-24 19:17:47.698324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:42.070 [2024-07-24 19:17:47.698362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.329 [2024-07-24 19:17:47.936180] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:25:42.329 [2024-07-24 19:17:47.944410] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.329 null0 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.329 null1 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1734936 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1734936 /tmp/host.sock 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1734936 ']' 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:42.329 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:42.329 19:17:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.588 [2024-07-24 19:17:48.050763] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
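Two SPDK applications are now in play: the nvmf target started above inside the namespace (core mask 0x2, RPC on the default /var/tmp/spdk.sock) and a second nvmf_tgt acting as the discovery-client host (core mask 0x1, RPC on /tmp/host.sock). Every rpc_cmd in the trace without -s provisions the target; every rpc_cmd -s /tmp/host.sock drives the host. The same split, sketched with SPDK's rpc.py (the ./scripts/rpc.py path is an assumption; the RPC names and arguments are the ones traced around this point):

    # Target instance: TCP transport plus a discovery listener on 8009.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

    # Host instance: enable bdev_nvme logging, then start the discovery
    # service against the target's discovery endpoint.
    ./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
        -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

From that point on, any subsystem the target exposes to nqn.2021-12.io.spdk:test should surface on the host as an nvme0* controller with matching bdevs, which is exactly what the assertions below poll for.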
00:25:42.588 [2024-07-24 19:17:48.050934] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1734936 ] 00:25:42.588 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.588 [2024-07-24 19:17:48.134507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.588 [2024-07-24 19:17:48.274848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.846 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:42.846 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:42.846 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:42.846 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:42.846 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.846 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.846 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.846 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:42.846 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.846 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.846 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.846 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:42.846 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:42.847 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.847 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.847 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.847 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.847 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.847 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.847 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.847 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:42.847 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:42.847 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.847 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.847 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.847 19:17:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.847 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.847 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.847 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:43.105 
19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.105 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.364 [2024-07-24 19:17:48.895045] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.364 19:17:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.364 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.364 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:43.364 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:43.364 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:43.364 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:43.364 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:43.364 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:43.364 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:43.364 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:43.364 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:43.364 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:43.364 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:43.364 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.364 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:43.623 19:17:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:43.881 [2024-07-24 19:17:49.486591] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:43.881 [2024-07-24 19:17:49.486625] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:43.881 [2024-07-24 19:17:49.486655] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:43.882 
[2024-07-24 19:17:49.572937] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:44.140 [2024-07-24 19:17:49.798053] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:44.140 [2024-07-24 19:17:49.798086] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
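Every milestone in this flow is asserted the same way: waitforcondition re-evaluates a condition string up to ten times, one second apart, and the conditions compare rpc.py output flattened by jq, sort, and xargs. Reconstructed from the helpers traced above (get_subsystem_names, get_bdev_list, and autotest_common.sh's polling loop; the ./scripts/rpc.py path is an assumption):

    # One sorted line of controller names, e.g. "nvme0".
    get_subsystem_names() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }

    # One sorted line of bdev names, e.g. "nvme0n1 nvme0n2".
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    # Retry a condition up to 10 times, sleeping 1s between attempts.
    waitforcondition() {
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'

The notification checks work on the same offset principle: notify_get_notifications -i $notify_id returns only events newer than the last recorded id, so piping the result through jq '. | length' counts how many add/remove events the preceding step generated.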
00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:44.707 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:44.708 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:44.708 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.708 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.708 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.708 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:44.708 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:44.708 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:44.708 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:44.708 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:44.708 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.708 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.966 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:44.967 19:17:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.967 [2024-07-24 19:17:50.511786] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:44.967 [2024-07-24 19:17:50.512760] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:44.967 [2024-07-24 19:17:50.512810] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.967 [2024-07-24 19:17:50.598621] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:44.967 19:17:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:44.967 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.224 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:45.224 19:17:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:45.224 [2024-07-24 19:17:50.903148] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:45.224 [2024-07-24 19:17:50.903180] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:45.224 [2024-07-24 19:17:50.903193] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:46.159 19:17:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.159 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.159 [2024-07-24 19:17:51.788137] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:46.159 [2024-07-24 19:17:51.788181] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:46.159 [2024-07-24 19:17:51.789710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.159 [2024-07-24 19:17:51.789750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.159 [2024-07-24 19:17:51.789795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.159 [2024-07-24 19:17:51.789828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.159 [2024-07-24 19:17:51.789863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.159 [2024-07-24 19:17:51.789896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.159 [2024-07-24 19:17:51.789928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.159 [2024-07-24 19:17:51.789960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.159 [2024-07-24 19:17:51.790001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ce230 is same with the state(5) to be set 00:25:46.160 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.160 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:46.160 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:46.160 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:46.160 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:46.160 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:46.160 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:46.160 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:46.160 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.160 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:46.160 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.160 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:46.160 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:46.160 [2024-07-24 19:17:51.799706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ce230 (9): Bad file descriptor 00:25:46.160 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.160 [2024-07-24 19:17:51.809749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.160 [2024-07-24 19:17:51.810091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.160 [2024-07-24 19:17:51.810137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ce230 with addr=10.0.0.2, port=4420 00:25:46.160 [2024-07-24 19:17:51.810175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ce230 is same with the state(5) to be set 00:25:46.160 [2024-07-24 19:17:51.810223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ce230 (9): Bad file descriptor 00:25:46.160 [2024-07-24 19:17:51.810293] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.160 [2024-07-24 19:17:51.810328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.160 [2024-07-24 19:17:51.810362] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.160 [2024-07-24 19:17:51.810404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.160 [2024-07-24 19:17:51.819854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.160 [2024-07-24 19:17:51.820152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.160 [2024-07-24 19:17:51.820194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ce230 with addr=10.0.0.2, port=4420 00:25:46.160 [2024-07-24 19:17:51.820230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ce230 is same with the state(5) to be set 00:25:46.160 [2024-07-24 19:17:51.820275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ce230 (9): Bad file descriptor 00:25:46.160 [2024-07-24 19:17:51.820342] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.160 [2024-07-24 19:17:51.820377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.160 [2024-07-24 19:17:51.820416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.160 [2024-07-24 19:17:51.820474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.160 [2024-07-24 19:17:51.829951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.160 [2024-07-24 19:17:51.830245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.160 [2024-07-24 19:17:51.830287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ce230 with addr=10.0.0.2, port=4420 00:25:46.160 [2024-07-24 19:17:51.830324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ce230 is same with the state(5) to be set 00:25:46.160 [2024-07-24 19:17:51.830370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ce230 (9): Bad file descriptor 00:25:46.160 [2024-07-24 19:17:51.830525] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.160 [2024-07-24 19:17:51.830562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.160 [2024-07-24 19:17:51.830592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.160 [2024-07-24 19:17:51.830634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
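The connect() failures above are expected: the test has just removed the 10.0.0.2:4420 listener, so each reconnect attempt from the controller reset path dies with errno 111 (connection refused) until the discovery poller prunes the stale path. The checks interleaved with those retries all run through autotest_common.sh's waitforcondition helper, whose shape is visible in the trace: capture the condition string, bound the attempts at ten, eval the condition, and sleep one second between tries. A minimal sketch of that polling pattern, reconstructed from the @914-@920 lines rather than copied from the source, under a hypothetical name wait_for:

    # Bounded poll: re-evaluate a condition string up to 10 times, 1 s apart,
    # mirroring the 'local max=10' / '(( max-- ))' / 'eval' / 'sleep 1' steps
    # visible in the trace. Returns non-zero if the condition never holds.
    wait_for() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0    # condition met
            fi
            sleep 1
        done
        return 1            # gave up after ~10 s
    }

Every "$(get_bdev_list)" and "$(get_subsystem_paths nvme0)" comparison in this trace is driven through a loop of this kind, which is why a transient failover shows up as a few failed evals followed by a passing one rather than as a hard test failure.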
00:25:46.160 [2024-07-24 19:17:51.840049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.160 [2024-07-24 19:17:51.840387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.160 [2024-07-24 19:17:51.840439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ce230 with addr=10.0.0.2, port=4420 00:25:46.160 [2024-07-24 19:17:51.840478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ce230 is same with the state(5) to be set 00:25:46.160 [2024-07-24 19:17:51.840524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ce230 (9): Bad file descriptor 00:25:46.160 [2024-07-24 19:17:51.840591] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.160 [2024-07-24 19:17:51.840627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.160 [2024-07-24 19:17:51.840659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.160 [2024-07-24 19:17:51.840700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.160 [2024-07-24 19:17:51.850149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.160 [2024-07-24 19:17:51.850446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.160 [2024-07-24 19:17:51.850488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ce230 with addr=10.0.0.2, port=4420 00:25:46.160 [2024-07-24 19:17:51.850523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ce230 is same with the state(5) to be set 00:25:46.160 [2024-07-24 19:17:51.850566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ce230 (9): Bad file descriptor 00:25:46.160 [2024-07-24 19:17:51.850631] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.160 [2024-07-24 19:17:51.850666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.160 [2024-07-24 19:17:51.850694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.160 [2024-07-24 19:17:51.850732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:46.419 [2024-07-24 19:17:51.860248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.419 [2024-07-24 19:17:51.860506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.419 [2024-07-24 19:17:51.860556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ce230 with addr=10.0.0.2, port=4420 00:25:46.419 [2024-07-24 19:17:51.860593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ce230 is same with the state(5) to be set 00:25:46.419 [2024-07-24 19:17:51.860640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ce230 (9): Bad file descriptor 00:25:46.419 [2024-07-24 19:17:51.860706] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.419 [2024-07-24 19:17:51.860742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.419 [2024-07-24 19:17:51.860773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.419 [2024-07-24 19:17:51.860812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:46.419 [2024-07-24 19:17:51.870345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.419 [2024-07-24 19:17:51.870594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.419 [2024-07-24 19:17:51.870636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ce230 with addr=10.0.0.2, port=4420 00:25:46.419 [2024-07-24 19:17:51.870670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ce230 is same with the state(5) to be set 00:25:46.419 [2024-07-24 19:17:51.870728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ce230 (9): Bad file descriptor 00:25:46.419 [2024-07-24 19:17:51.870797] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.419 [2024-07-24 19:17:51.870833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.419 [2024-07-24 19:17:51.870864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.419 [2024-07-24 19:17:51.870903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:46.419 [2024-07-24 19:17:51.874759] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:46.419 [2024-07-24 19:17:51.874800] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # 
is_notification_count_eq 0 00:25:46.419 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:46.420 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:46.420 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:46.420 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:46.420 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:46.420 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:46.420 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:46.420 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:46.420 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:46.420 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.420 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.420 19:17:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- 
# jq -r '.[].name' 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:46.420 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # get_notification_count 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.678 19:17:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.613 [2024-07-24 19:17:53.254952] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:47.613 [2024-07-24 19:17:53.254987] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:47.613 [2024-07-24 19:17:53.255017] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:47.878 [2024-07-24 19:17:53.343304] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:47.878 [2024-07-24 19:17:53.450678] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:47.878 [2024-07-24 19:17:53.450728] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:47.878 19:17:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.878 request: 00:25:47.878 { 00:25:47.878 "name": "nvme", 00:25:47.878 "trtype": "tcp", 00:25:47.878 "traddr": "10.0.0.2", 00:25:47.878 "adrfam": "ipv4", 00:25:47.878 "trsvcid": "8009", 00:25:47.878 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:47.878 "wait_for_attach": true, 00:25:47.878 "method": "bdev_nvme_start_discovery", 00:25:47.878 "req_id": 1 00:25:47.878 } 00:25:47.878 Got JSON-RPC error response 00:25:47.878 response: 00:25:47.878 { 00:25:47.878 "code": -17, 00:25:47.878 "message": "File exists" 00:25:47.878 } 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:47.878 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.135 19:17:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.135 request: 00:25:48.135 { 00:25:48.135 "name": "nvme_second", 00:25:48.135 "trtype": "tcp", 00:25:48.135 "traddr": "10.0.0.2", 00:25:48.135 "adrfam": "ipv4", 00:25:48.135 "trsvcid": "8009", 00:25:48.135 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:48.135 "wait_for_attach": true, 00:25:48.135 "method": "bdev_nvme_start_discovery", 00:25:48.135 "req_id": 1 00:25:48.135 } 00:25:48.135 Got JSON-RPC error response 00:25:48.135 response: 00:25:48.135 { 00:25:48.135 "code": -17, 00:25:48.135 "message": "File exists" 00:25:48.135 } 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:48.135 19:17:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.135 19:17:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.068 [2024-07-24 19:17:54.754498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:17:54.754567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e95b0 with addr=10.0.0.2, port=8010 00:25:49.068 [2024-07-24 19:17:54.754613] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:49.068 [2024-07-24 19:17:54.754641] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:49.068 [2024-07-24 19:17:54.754668] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:50.442 [2024-07-24 19:17:55.757030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 
19:17:55.757081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e95b0 with addr=10.0.0.2, port=8010 00:25:50.442 [2024-07-24 19:17:55.757124] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:50.442 [2024-07-24 19:17:55.757154] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:50.442 [2024-07-24 19:17:55.757183] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:51.405 [2024-07-24 19:17:56.759100] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:51.405 request: 00:25:51.405 { 00:25:51.405 "name": "nvme_second", 00:25:51.405 "trtype": "tcp", 00:25:51.405 "traddr": "10.0.0.2", 00:25:51.405 "adrfam": "ipv4", 00:25:51.405 "trsvcid": "8010", 00:25:51.405 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:51.405 "wait_for_attach": false, 00:25:51.405 "attach_timeout_ms": 3000, 00:25:51.405 "method": "bdev_nvme_start_discovery", 00:25:51.405 "req_id": 1 00:25:51.405 } 00:25:51.405 Got JSON-RPC error response 00:25:51.405 response: 00:25:51.405 { 00:25:51.405 "code": -110, 00:25:51.405 "message": "Connection timed out" 00:25:51.405 } 00:25:51.405 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:51.405 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:51.405 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:51.405 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:51.405 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:51.405 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:51.405 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:51.405 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.405 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:51.405 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.405 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1734936 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:51.406 
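Both negative cases above exercise bdev_nvme_start_discovery. Re-registering the name nvme, which is already attached, is rejected up front with JSON-RPC error -17 ("File exists"); starting nvme_second against 10.0.0.2:8010, where nothing is listening, burns through its 3000 ms attach budget (each connect() failing with errno 111) and comes back with -110 ("Connection timed out") instead of hanging. Roughly, issued with scripts/rpc.py rather than the harness's rpc_cmd wrapper:

    # Duplicate discovery name: the target answers immediately with
    # code -17, message "File exists".
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
        -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # Unreachable discovery service plus a 3 s attach timeout (-T 3000):
    # fails with code -110, message "Connection timed out".
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000

After these checks pass, the test clears its trap and kills the host app (pid 1734936), and nvmftestfini unloads the nvme-tcp/fabrics/keyring modules and stops the target process (pid 1734894); that is the rmmod and "killing process" output that follows.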
19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:51.406 rmmod nvme_tcp 00:25:51.406 rmmod nvme_fabrics 00:25:51.406 rmmod nvme_keyring 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1734894 ']' 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1734894 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1734894 ']' 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1734894 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1734894 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1734894' 00:25:51.406 killing process with pid 1734894 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1734894 00:25:51.406 19:17:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1734894 00:25:51.672 19:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:51.672 19:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:51.672 19:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:51.672 19:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:51.672 19:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:51.672 19:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.672 19:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.672 19:17:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:54.212 00:25:54.212 real 0m15.172s 00:25:54.212 user 0m21.779s 00:25:54.212 sys 0m3.936s 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.212 ************************************ 00:25:54.212 END TEST nvmf_host_discovery 00:25:54.212 
************************************ 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.212 ************************************ 00:25:54.212 START TEST nvmf_host_multipath_status 00:25:54.212 ************************************ 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:54.212 * Looking for test storage... 00:25:54.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:54.212 19:17:59 
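paths/export.sh prepends the golangci, protoc and go directories every time it is sourced, which is why the PATH echoed above carries the same three entries seven times over. The duplication is harmless, but if it ever needed cleaning up, an order-preserving dedupe is a one-liner (illustrative only, not part of the harness):

    # Drop repeated PATH components while keeping first-seen order.
    PATH=$(printf '%s' "$PATH" | awk -v RS=: '!seen[$0]++ { printf "%s%s", sep, $0; sep=":" }')
    export PATH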
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:54.212 19:17:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:56.749 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:56.749 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:56.749 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:56.749 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
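Note the ordering in nvmftestinit above: the trap on nvmftestfini is installed before any network setup happens, so an interrupted or failed run still unwinds namespaces and modules. The shape of that pattern, with hypothetical teardown steps standing in for the real nvmftestfini:

    nvmftestfini_sketch() {
        # placeholder teardown; the real nvmftestfini does much more
        ip netns del cvl_0_0_ns_spdk 2>/dev/null || true
        modprobe -r nvme-tcp 2>/dev/null || true
    }
    trap nvmftestfini_sketch SIGINT SIGTERM EXIT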
nvmf/common.sh@292 -- # pci_net_devs=() 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:56.750 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:56.750 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:56.750 Found net devices under 0000:84:00.0: cvl_0_0 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:56.750 19:18:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:56.750 Found net devices under 0000:84:00.1: cvl_0_1 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:56.750 19:18:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:56.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:56.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:25:56.750 00:25:56.750 --- 10.0.0.2 ping statistics --- 00:25:56.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.750 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:56.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:56.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:25:56.750 00:25:56.750 --- 10.0.0.1 ping statistics --- 00:25:56.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:56.750 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:56.750 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1738290 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1738290 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1738290 ']' 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
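The nvmf_tcp_init sequence above splits the two physical ports across namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the pair of pings (0.171 ms and 0.074 ms round trips) proves both directions work before any NVMe traffic flows. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator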
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:56.751 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:56.751 [2024-07-24 19:18:02.333050] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:25:56.751 [2024-07-24 19:18:02.333143] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.751 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.751 [2024-07-24 19:18:02.416302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:57.010 [2024-07-24 19:18:02.556242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.010 [2024-07-24 19:18:02.556324] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.010 [2024-07-24 19:18:02.556345] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.010 [2024-07-24 19:18:02.556361] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.010 [2024-07-24 19:18:02.556376] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
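waitforlisten does not sleep a fixed time; it polls until the freshly started nvmf_tgt (pid 1738290) answers on /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A hedged sketch of that loop; the probe RPC and retry interval here are assumptions, not the verbatim helper from autotest_common.sh:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do                # max_retries=100, as in the trace
            kill -0 "$pid" 2>/dev/null || return 1     # target died during startup
            "$rpc_py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1                                  # retry interval is an assumption
        done
        return 1
    }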
00:25:57.010 [2024-07-24 19:18:02.559458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.010 [2024-07-24 19:18:02.559474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.268 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:57.268 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:57.268 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:57.268 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:57.268 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:57.268 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.268 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1738290 00:25:57.268 19:18:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:57.526 [2024-07-24 19:18:03.061460] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.526 19:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:58.096 Malloc0 00:25:58.096 19:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:58.354 19:18:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:58.612 19:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.870 [2024-07-24 19:18:04.457824] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:58.871 19:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:59.129 [2024-07-24 19:18:04.814912] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:59.388 19:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1738617 00:25:59.388 19:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:59.388 19:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:59.388 19:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1738617 
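Everything the target needs is provisioned above through rpc.py against the namespaced nvmf_tgt: the TCP transport with the harness's standard options (-o -u 8192), a 64 MiB Malloc bdev with 512-byte blocks, subsystem cnode1 with any-host access (-a) and ANA reporting (-r, the whole point of this test), its namespace, and one listener per portal. The same sequence as a standalone script, with the workspace path shortened:

    rpc=./spdk/scripts/rpc.py    # stands in for the full workspace path above
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421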
/var/tmp/bdevperf.sock 00:25:59.388 19:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1738617 ']' 00:25:59.388 19:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:59.388 19:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:59.388 19:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:59.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:59.388 19:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:59.388 19:18:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:59.646 19:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:59.646 19:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:59.646 19:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:59.904 19:18:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:00.839 Nvme0n1 00:26:00.839 19:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:01.406 Nvme0n1 00:26:01.406 19:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:01.406 19:18:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:03.307 19:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:03.307 19:18:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:03.873 19:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:04.132 19:18:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:05.067 19:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:05.067 19:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:05.067 19:18:10 
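The initiator side mirrors it: bdevperf sits idle (-z) on its own RPC socket, a retry option is relaxed to -1 (unbounded) via bdev_nvme_set_options, and the same subsystem is attached through both portals, with -x multipath on the second attach so the two sessions merge into the single two-path bdev Nvme0n1 that every check below interrogates:

    bperf="$rpc_py -s /var/tmp/bdevperf.sock"   # bdevperf's private RPC socket
    $bperf bdev_nvme_set_options -r -1
    $bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    # -l/-o: reconnect tuning exactly as passed by the script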
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.067 19:18:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.325 19:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.325 19:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:05.325 19:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.325 19:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.583 19:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.583 19:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.583 19:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.583 19:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.149 19:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.149 19:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:06.149 19:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.149 19:18:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:06.715 19:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.715 19:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:06.715 19:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.715 19:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:06.974 19:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.974 19:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:06.974 19:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.974 19:18:12 
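Each of the repeated checks above is one RPC plus one jq filter: bdev_nvme_get_io_paths dumps every I/O path per poll group, and the filter picks the path out by listener port (trsvcid) and extracts a single boolean attribute (current, connected, or accessible). The helper these traces come from, reconstructed:

    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }
    port_status 4420 current true    # e.g.: is the 4420 path the active one?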
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:07.260 19:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.260 19:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:07.260 19:18:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:07.519 19:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:07.777 19:18:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:09.152 19:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:09.152 19:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:09.152 19:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.152 19:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:09.152 19:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.152 19:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:09.152 19:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.152 19:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:09.410 19:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.410 19:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:09.410 19:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.410 19:18:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:09.976 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.976 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:09.976 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.976 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:10.235 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.235 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:10.235 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.235 19:18:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:10.493 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.493 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:10.493 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:10.493 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.752 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.752 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:10.752 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:11.319 19:18:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:11.577 19:18:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:12.512 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:12.512 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:12.512 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.512 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:13.077 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.077 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:13.077 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
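Every scenario is driven from the target side: set_ANA_state flips the ANA state of each listener with one RPC apiece, and the sleep 1 that follows gives the controller time to deliver the ANA change notification before the paths are re-read. Reconstructed from the trace:

    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    set_ANA_state non_optimized non_optimized   # the scenario checked just above
    sleep 1                                     # let the ANA change propagate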
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.077 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.335 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.335 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.335 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.335 19:18:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.902 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.902 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.902 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.902 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.160 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.160 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:14.160 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.160 19:18:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:14.418 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.418 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:14.418 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.418 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:14.985 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.985 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:14.985 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:26:15.243 19:18:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:15.810 19:18:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:16.745 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:16.745 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:16.745 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.745 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.003 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.003 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:17.003 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.003 19:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:17.568 19:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.568 19:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:17.568 19:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.569 19:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:17.829 19:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.829 19:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:17.829 19:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.829 19:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:18.087 19:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.087 19:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:18.087 19:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:18.087 19:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:18.345 19:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.345 19:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:18.345 19:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.345 19:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:18.912 19:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.912 19:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:18.912 19:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:19.170 19:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:19.736 19:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:20.670 19:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:20.670 19:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:20.670 19:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.670 19:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:21.262 19:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:21.262 19:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:21.262 19:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.262 19:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:21.525 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:21.525 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:21.525 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
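check_status's six booleans are current/connected/accessible for port 4420 followed by the same three for 4421. The inaccessible/inaccessible case above expects false false true true false false: neither path is current or accessible, yet both TCP connections stay connected, which is what makes failback instantaneous once a listener is flipped back. The wrapper implied by the trace:

    check_status() {
        port_status 4420 current    "$1"
        port_status 4421 current    "$2"
        port_status 4420 connected  "$3"
        port_status 4421 connected  "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }
    check_status false false true true false false   # both paths down, sessions kept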
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.525 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:21.783 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.783 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:21.783 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.783 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:22.350 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.350 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:22.350 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.350 19:18:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:22.916 19:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.916 19:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:22.916 19:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.916 19:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:23.481 19:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.481 19:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:23.481 19:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:24.047 19:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:24.613 19:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:25.547 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:25.547 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:25.547 19:18:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.547 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.806 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:25.806 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:25.806 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.806 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:26.373 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.373 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:26.373 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.373 19:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:26.631 19:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.632 19:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:26.632 19:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.632 19:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:26.891 19:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.891 19:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:26.891 19:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.891 19:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:27.149 19:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.150 19:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:27.150 19:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.150 
19:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:27.408 19:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.408 19:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:27.975 19:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:27.975 19:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:28.233 19:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:28.492 19:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:29.867 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:29.867 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:29.867 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.867 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:29.867 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.867 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:29.867 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.867 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:30.434 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.435 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:30.435 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.435 19:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:30.693 19:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.693 19:18:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:30.693 19:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.693 19:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:30.952 19:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.952 19:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:30.952 19:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.952 19:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:31.210 19:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.210 19:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:31.210 19:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.210 19:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:31.777 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.777 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:31.777 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:32.036 19:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:32.602 19:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:33.537 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:33.537 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:33.537 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.537 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:33.796 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:33.796 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:33.796 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.796 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:34.363 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.363 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:34.363 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.363 19:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:34.622 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.622 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:34.622 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.622 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:34.882 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.882 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:34.882 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.882 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:35.156 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.156 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:35.156 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.156 19:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:35.425 19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.425 19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:35.425 
19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:35.991 19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:36.249 19:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:37.184 19:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:37.184 19:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:37.184 19:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.184 19:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:37.751 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.751 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:37.751 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.751 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:38.009 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.009 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:38.009 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.009 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:38.268 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.268 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:38.268 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.268 19:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:38.526 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.526 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:38.526 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.526 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:38.784 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.784 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:38.784 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.784 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:39.042 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.042 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:39.042 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:39.301 19:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:39.867 19:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:40.802 19:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:40.802 19:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:40.802 19:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.802 19:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:41.060 19:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.060 19:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:41.060 19:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.060 19:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:41.626 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:26:41.627 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:41.627 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.627 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:41.885 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.885 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:41.885 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.885 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:42.452 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.452 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:42.452 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.452 19:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:42.709 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.709 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:42.709 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.709 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:42.968 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:42.968 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1738617 00:26:42.968 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1738617 ']' 00:26:42.968 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1738617 00:26:42.968 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:42.968 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:42.968 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1738617 00:26:42.968 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # 
process_name=reactor_2 00:26:42.968 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:42.968 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1738617' 00:26:42.968 killing process with pid 1738617 00:26:42.968 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1738617 00:26:42.968 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1738617 00:26:43.229 Connection closed with partial response: 00:26:43.229 00:26:43.229 00:26:43.229 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1738617 00:26:43.229 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:43.229 [2024-07-24 19:18:04.893787] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:26:43.229 [2024-07-24 19:18:04.893888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738617 ] 00:26:43.229 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.230 [2024-07-24 19:18:04.978536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.230 [2024-07-24 19:18:05.118649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:43.230 Running I/O for 90 seconds... 00:26:43.230 [2024-07-24 19:18:24.655546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.230 [2024-07-24 19:18:24.655612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.655702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.655732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.655765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:126880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.655788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.655819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.655842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.655873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:126896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.655896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.655927] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.655949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.655979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:126912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.656002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.656031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.656054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.656084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:126928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.656107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.656137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.656159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.656190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:126944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.656227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.656260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.656284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.656314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.656337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.656366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.656390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.656419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.656454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 
sqhd:0032 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.656494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.656517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.656548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.656571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.657462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.657495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.657535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.657560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.657593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.657615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.657646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:127024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.657674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.657706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.657728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.657759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.657789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.657823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.657846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.657878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.657900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.657931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.657953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.657985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.658007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.658039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.658062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.658092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:127088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.658116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.658147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.658169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.658203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.658226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.658258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.658280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.658313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.658335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.658367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.658390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.658421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.658459] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.658511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:127144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.658537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:43.230 [2024-07-24 19:18:24.658569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.230 [2024-07-24 19:18:24.658592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.658624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.658647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.658679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.658702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.658733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.658757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.658789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.658811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.658845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:127192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.658868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.658899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.658922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.658954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:127208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.658976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.659007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:127216 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.659030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.659062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.659085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.659118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.659140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.659179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.659204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.659238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.659261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.659511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.659543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.659585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.659610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.659646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.659670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.659705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.659728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.659763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:127288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.659785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.659820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:48 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.659842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.659879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.659901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.659936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.659959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.659993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:127320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.660016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.660075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.660138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.660197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.231 [2024-07-24 19:18:24.660254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:43.231 [2024-07-24 19:18:24.660313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.660370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660405] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.660436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.660499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.660556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.660613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.660670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.660728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.660786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.660848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.660906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.660963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 
sqhd:006d p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.660998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.231 [2024-07-24 19:18:24.661020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:43.231 [2024-07-24 19:18:24.661055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.232 [2024-07-24 19:18:24.661077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:43.232 [2024-07-24 19:18:24.661111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:127456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.232 [2024-07-24 19:18:24.661134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:43.232 [2024-07-24 19:18:24.661168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.232 [2024-07-24 19:18:24.661191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:43.232 [2024-07-24 19:18:24.661226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.232 [2024-07-24 19:18:24.661248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:43.232 [2024-07-24 19:18:24.661283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.232 [2024-07-24 19:18:24.661306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:43.232 [2024-07-24 19:18:24.661341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.232 [2024-07-24 19:18:24.661363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:43.232 [2024-07-24 19:18:24.661398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.232 [2024-07-24 19:18:24.661420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:43.232 [2024-07-24 19:18:24.661464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.232 [2024-07-24 19:18:24.661496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:43.232 [2024-07-24 19:18:24.661531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.232 [2024-07-24 19:18:24.661553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:43.232 [2024-07-24 19:18:24.661597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:43.232 [2024-07-24 19:18:24.661621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
[... several hundred near-identical nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs condensed: with the active path inaccessible, every queued I/O on qid:1 completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) - WRITEs lba 127528-127864 at 19:18:24, then WRITEs lba 72288-72504 and READs lba 72000-72264 at 19:18:45 ...]
00:26:43.234 Received shutdown signal, test time was about 41.448164 seconds
00:26:43.234
00:26:43.234                                Latency(us)
00:26:43.234 Device Information : runtime(s)     IOPS      MiB/s    Fail/s    TO/s    Average       min         max
00:26:43.234 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:43.234 	 Verification LBA range: start 0x0 length 0x4000
00:26:43.234 	 Nvme0n1            :     41.45   5847.10    22.84      0.00     0.00   21852.78    606.81  6039797.76
00:26:43.234 
=================================================================================================================== 00:26:43.234 Total : 5847.10 22.84 0.00 0.00 21852.78 606.81 6039797.76 00:26:43.234 19:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:43.799 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:43.799 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:43.799 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:43.799 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:43.799 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:43.799 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:43.799 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:43.799 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:43.799 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:43.799 rmmod nvme_tcp 00:26:43.799 rmmod nvme_fabrics 00:26:43.799 rmmod nvme_keyring 00:26:43.799 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:44.059 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:44.059 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:44.059 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1738290 ']' 00:26:44.059 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1738290 00:26:44.059 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1738290 ']' 00:26:44.059 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1738290 00:26:44.059 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:44.059 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:44.059 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1738290 00:26:44.059 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:44.059 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:44.059 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1738290' 00:26:44.059 killing process with pid 1738290 00:26:44.059 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1738290 00:26:44.059 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1738290 00:26:44.317 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- 
# '[' '' == iso ']' 00:26:44.317 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:44.317 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:44.317 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:44.317 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:44.317 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.317 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.317 19:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:46.849 00:26:46.849 real 0m52.619s 00:26:46.849 user 2m42.273s 00:26:46.849 sys 0m14.308s 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:46.849 ************************************ 00:26:46.849 END TEST nvmf_host_multipath_status 00:26:46.849 ************************************ 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.849 ************************************ 00:26:46.849 START TEST nvmf_discovery_remove_ifc 00:26:46.849 ************************************ 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:46.849 * Looking for test storage... 
00:26:46.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[... paths/export.sh@2-@6 condensed: each sourcing prepends the same golangci/protoc/go toolchain directories to PATH again, then exports and echoes the result; the repeated multi-kilobyte PATH values are elided ...]
00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:46.849 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:46.850 19:18:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.395 19:18:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:49.395 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:49.395 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.395 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:49.396 Found net devices under 0000:84:00.0: cvl_0_0 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.396 
19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:49.396 Found net devices under 0000:84:00.1: cvl_0_1 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:49.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:49.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:26:49.396 00:26:49.396 --- 10.0.0.2 ping statistics --- 00:26:49.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.396 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:49.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:26:49.396 00:26:49.396 --- 10.0.0.1 ping statistics --- 00:26:49.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.396 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1746373 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1746373 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1746373 ']' 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
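[editor's note] The nvmf_tcp_init plumbing traced above reduces to a handful of iproute2/iptables commands; a minimal sketch using the interface names and addresses from this run (an e810 port pair already renamed cvl_0_0/cvl_0_1 is assumed):

  # target port lives in its own network namespace, initiator stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                  # target -> initiator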
00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:49.396 19:18:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.396 [2024-07-24 19:18:55.010739] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:26:49.396 [2024-07-24 19:18:55.010914] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.396 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.655 [2024-07-24 19:18:55.133015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.655 [2024-07-24 19:18:55.275357] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:49.655 [2024-07-24 19:18:55.275425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:49.655 [2024-07-24 19:18:55.275478] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:49.655 [2024-07-24 19:18:55.275496] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:49.655 [2024-07-24 19:18:55.275511] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:49.655 [2024-07-24 19:18:55.275547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.913 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:49.913 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:49.913 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:49.913 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:49.913 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.913 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.914 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:49.914 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.914 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.914 [2024-07-24 19:18:55.456251] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.914 [2024-07-24 19:18:55.464484] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:49.914 null0 00:26:49.914 [2024-07-24 19:18:55.496395] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.914 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.914 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1746518 00:26:49.914 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 
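[editor's note] Two SPDK apps run side by side in this test, so the host-side instance is started with a private RPC socket; a minimal sketch of that pattern, with build paths relative to the spdk checkout and the waitforlisten polling elided:

  # target: runs inside the netns on the default /var/tmp/spdk.sock RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

  # "host": separate core mask, private RPC socket, paused until framework_start_init
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

  # each instance is then addressed by pointing rpc.py at the right socket
  ./scripts/rpc.py nvmf_create_transport -t tcp -o                # target (default socket)
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1   # host
  ./scripts/rpc.py -s /tmp/host.sock framework_start_init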
00:26:49.914 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1746518 /tmp/host.sock 00:26:49.914 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1746518 ']' 00:26:49.914 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:49.914 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:49.914 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:49.914 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:49.914 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:49.914 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.914 [2024-07-24 19:18:55.575631] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:26:49.914 [2024-07-24 19:18:55.575724] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746518 ] 00:26:50.193 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.193 [2024-07-24 19:18:55.659217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.193 [2024-07-24 19:18:55.798684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.193 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:50.193 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:50.193 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:50.193 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:50.193 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.193 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.459 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.459 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:50.459 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.459 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.459 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.459 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:50.460 19:18:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.460 19:18:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.395 [2024-07-24 19:18:57.055385] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:51.395 [2024-07-24 19:18:57.055417] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:51.395 [2024-07-24 19:18:57.055452] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:51.653 [2024-07-24 19:18:57.182901] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:51.911 [2024-07-24 19:18:57.408780] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:51.911 [2024-07-24 19:18:57.408863] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:51.911 [2024-07-24 19:18:57.408917] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:51.912 [2024-07-24 19:18:57.408948] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:51.912 [2024-07-24 19:18:57.408980] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.912 [2024-07-24 19:18:57.454259] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1691e50 was disconnected and freed. delete nvme_qpair. 
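The get_bdev_list/wait_for_bdev pipeline that the trace repeats from here on amounts to, approximately:

# Names the host app's bdevs as a single sorted line ("nvme0n1" at this point).
get_bdev_list() {
    $SPDK/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs |
        jq -r '.[].name' | sort | xargs
}

# Polls once per second until the list matches what the test expects:
# "nvme0n1" after attach, "" after the interface flap, "nvme1n1" after re-attach.
wait_for_bdev() {
    local bdev_check=$1
    while [[ "$(get_bdev_list)" != "$bdev_check" ]]; do
        sleep 1
    done
}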
00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:51.912 19:18:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:53.286 19:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:53.286 19:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.286 19:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:53.286 19:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.286 19:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:53.286 19:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:53.286 19:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:53.286 19:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.286 19:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:53.286 19:18:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:54.221 19:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:54.221 19:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.221 19:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.221 19:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.221 19:18:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:54.221 19:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:54.221 19:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:54.221 19:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.221 19:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:54.221 19:18:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:55.155 19:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:55.155 19:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:55.155 19:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:55.155 19:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.155 19:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.155 19:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:55.155 19:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:55.155 19:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.155 19:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:55.155 19:19:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:56.088 19:19:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:56.088 19:19:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.088 19:19:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:56.088 19:19:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.088 19:19:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.088 19:19:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:56.088 19:19:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:56.088 19:19:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.345 19:19:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:56.345 19:19:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:57.301 19:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.301 19:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.301 19:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.301 19:19:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.301 19:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.301 19:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.301 19:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:57.301 [2024-07-24 19:19:02.849237] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:57.301 [2024-07-24 19:19:02.849326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.301 [2024-07-24 19:19:02.849368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.301 [2024-07-24 19:19:02.849402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.301 [2024-07-24 19:19:02.849445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.301 [2024-07-24 19:19:02.849495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.301 [2024-07-24 19:19:02.849527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.301 [2024-07-24 19:19:02.849560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.301 [2024-07-24 19:19:02.849591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.301 [2024-07-24 19:19:02.849635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.301 [2024-07-24 19:19:02.849667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.301 [2024-07-24 19:19:02.849699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1658890 is same with the state(5) to be set 00:26:57.301 19:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.301 [2024-07-24 19:19:02.859256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1658890 (9): Bad file descriptor 00:26:57.301 [2024-07-24 19:19:02.869307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:57.301 19:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:57.301 19:19:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:58.235 19:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:58.235 19:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.235 19:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.235 19:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.235 19:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:58.235 19:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:58.235 [2024-07-24 19:19:03.909546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:58.235 19:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:58.235 [2024-07-24 19:19:03.909623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1658890 with addr=10.0.0.2, port=4420 00:26:58.235 [2024-07-24 19:19:03.909669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1658890 is same with the state(5) to be set 00:26:58.235 [2024-07-24 19:19:03.909744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1658890 (9): Bad file descriptor 00:26:58.235 [2024-07-24 19:19:03.909873] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:58.235 [2024-07-24 19:19:03.909945] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:58.235 [2024-07-24 19:19:03.909979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:58.235 [2024-07-24 19:19:03.910014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:58.235 [2024-07-24 19:19:03.910072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:58.235 [2024-07-24 19:19:03.910108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:58.235 19:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.492 19:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:58.493 19:19:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:59.427 [2024-07-24 19:19:04.912637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:59.427 [2024-07-24 19:19:04.912676] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:59.427 [2024-07-24 19:19:04.912716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:59.427 [2024-07-24 19:19:04.912748] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:59.427 [2024-07-24 19:19:04.912805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
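The reconnect cadence visible here follows the knobs passed to bdev_nvme_start_discovery at the start of the test: --reconnect-delay-sec 1 retries roughly once per second, --fast-io-fail-timeout-sec 1 fails queued I/O after one second without a path, and --ctrlr-loss-timeout-sec 2 deletes the controller if the path is still gone after two. A sketch for watching the controller state while the flap is in progress (the JSON shape varies by SPDK version):

# Dump the host app's NVMe controllers while the reconnect loop runs.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /tmp/host.sock bdev_nvme_get_controllers | jq .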
00:26:59.427 [2024-07-24 19:19:04.912869] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:59.427 [2024-07-24 19:19:04.912933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.427 [2024-07-24 19:19:04.912972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.427 [2024-07-24 19:19:04.913013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.427 [2024-07-24 19:19:04.913046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.427 [2024-07-24 19:19:04.913080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.427 [2024-07-24 19:19:04.913110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.427 [2024-07-24 19:19:04.913143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.427 [2024-07-24 19:19:04.913178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.427 [2024-07-24 19:19:04.913210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.427 [2024-07-24 19:19:04.913243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.427 [2024-07-24 19:19:04.913275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
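The fault being injected is an interface flap inside the target's network namespace; the four commands are in the trace itself (cvl_0_0 and cvl_0_0_ns_spdk were set up earlier in the job):

# Take the path away (traced at the top of the test) ...
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
# ... let the reconnect attempts fail, then give it back (traced just below).
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up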
00:26:59.427 [2024-07-24 19:19:04.913359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1657cf0 (9): Bad file descriptor 00:26:59.427 [2024-07-24 19:19:04.914348] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:59.427 [2024-07-24 19:19:04.914384] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:59.427 19:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.427 19:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.427 19:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.427 19:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.427 19:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.427 19:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.427 19:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.427 19:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.427 19:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:59.427 19:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.427 19:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.427 19:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:59.427 19:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.427 19:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.427 19:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.427 19:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.427 19:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.427 19:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.427 19:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.427 19:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.427 19:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:59.427 19:19:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:00.799 19:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:00.799 19:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.799 19:19:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.799 19:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:00.799 19:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.799 19:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:00.799 19:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:00.799 19:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.799 19:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:00.799 19:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.364 [2024-07-24 19:19:06.931182] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:01.364 [2024-07-24 19:19:06.931213] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:01.364 [2024-07-24 19:19:06.931245] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:01.364 [2024-07-24 19:19:07.019572] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:01.622 19:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.622 19:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.622 19:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.622 19:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.622 19:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.622 19:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.622 19:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.622 19:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.622 19:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:01.622 19:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.622 [2024-07-24 19:19:07.244826] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:01.622 [2024-07-24 19:19:07.244893] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:01.622 [2024-07-24 19:19:07.244940] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:01.622 [2024-07-24 19:19:07.244970] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:01.622 [2024-07-24 19:19:07.244988] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:01.622 [2024-07-24 19:19:07.250130] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x169b670 was disconnected and freed. 
delete nvme_qpair. 00:27:02.556 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:02.556 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.556 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:02.556 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.556 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.556 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:02.556 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:02.556 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.815 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:02.815 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:02.815 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1746518 00:27:02.815 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1746518 ']' 00:27:02.815 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1746518 00:27:02.815 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:27:02.815 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:02.815 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1746518 00:27:02.815 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:02.815 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:02.815 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1746518' 00:27:02.815 killing process with pid 1746518 00:27:02.815 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1746518 00:27:02.815 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1746518 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:03.073 rmmod nvme_tcp 00:27:03.073 rmmod nvme_fabrics 00:27:03.073 rmmod nvme_keyring 
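The killprocess helper traced here (and again below for the target pid) is, approximately:

# Kill an SPDK app by pid, refusing to signal a sudo wrapper.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                 # still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")    # reactor_0, reactor_1, ...
    [[ $name != sudo ]] || return 1            # never kill the sudo parent
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}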
00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1746373 ']' 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1746373 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1746373 ']' 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1746373 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1746373 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1746373' 00:27:03.073 killing process with pid 1746373 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1746373 00:27:03.073 19:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1746373 00:27:03.640 19:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:03.640 19:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:03.640 19:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:03.640 19:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.640 19:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:03.640 19:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.640 19:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.640 19:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.547 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:05.547 00:27:05.547 real 0m19.038s 00:27:05.547 user 0m27.181s 00:27:05.547 sys 0m3.810s 00:27:05.547 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:05.547 19:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.547 ************************************ 00:27:05.547 END TEST nvmf_discovery_remove_ifc 00:27:05.547 ************************************ 00:27:05.547 19:19:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:05.547 19:19:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:05.547 19:19:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:05.547 19:19:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.547 ************************************ 00:27:05.547 START TEST nvmf_identify_kernel_target 00:27:05.547 ************************************ 00:27:05.547 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:05.832 * Looking for test storage... 00:27:05.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.832 19:19:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:05.832 19:19:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:08.368 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:08.368 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:08.368 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:08.369 
19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:08.369 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:08.369 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:08.369 Found net devices under 0000:84:00.0: cvl_0_0 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:08.369 Found net devices under 0000:84:00.1: cvl_0_1 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:08.369 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:08.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:08.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:27:08.629 00:27:08.629 --- 10.0.0.2 ping statistics --- 00:27:08.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.629 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:08.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:08.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:27:08.629 00:27:08.629 --- 10.0.0.1 ping statistics --- 00:27:08.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.629 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:08.629 19:19:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:10.532 Waiting for block devices as requested 00:27:10.532 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:10.532 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:10.532 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:10.791 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:10.791 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:10.791 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:10.791 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:11.050 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:11.050 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:11.050 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:11.308 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:11.308 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:11.308 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:11.567 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:11.567 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:11.567 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:11.567 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:11.825 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
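The device-selection loop traced above (nvmf/common.sh@650 onward) is how the test chooses a backing device for the kernel target: it walks /sys/block/nvme*, skips zoned namespaces, and only claims a device once block_in_use finds no partition table on it. A condensed sketch of that selection logic, assuming the helpers behave as the trace suggests (the real block_in_use tries spdk-gpt.py first and falls back to blkid, as the next lines show):

nvme=
for block in /sys/block/nvme*; do
    [[ -e $block ]] || continue                       # glob may not match anything
    dev=${block##*/}
    # skip zoned namespaces (queue/zoned reads "none" for ordinary devices)
    [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
    # a device with no partition table prints nothing here and is free to export
    pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
    [[ -z $pt ]] && { nvme=/dev/$dev; break; }
done
echo "selected backing device: ${nvme:-none}"

The trace below runs exactly this check against the freshly reset nvme0n1 (hence "No valid GPT data, bailing") before wiring it into the nvmet configfs tree.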
00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:11.826 No valid GPT data, bailing 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:11.826 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:27:12.085 00:27:12.085 Discovery Log Number of Records 2, Generation counter 2 00:27:12.085 =====Discovery Log Entry 0====== 00:27:12.085 trtype: tcp 00:27:12.085 adrfam: ipv4 00:27:12.085 subtype: current discovery subsystem 00:27:12.085 treq: not specified, sq flow control disable supported 00:27:12.085 portid: 1 00:27:12.085 trsvcid: 4420 00:27:12.085 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:12.085 traddr: 10.0.0.1 00:27:12.085 eflags: none 00:27:12.085 sectype: none 00:27:12.085 =====Discovery Log Entry 1====== 00:27:12.085 trtype: tcp 00:27:12.085 adrfam: ipv4 00:27:12.085 subtype: nvme subsystem 00:27:12.085 treq: not specified, sq flow control disable supported 00:27:12.085 portid: 1 00:27:12.085 trsvcid: 4420 00:27:12.085 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:12.085 traddr: 10.0.0.1 00:27:12.085 eflags: none 00:27:12.085 sectype: none 00:27:12.085 19:19:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:12.085 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:12.085 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.085 ===================================================== 00:27:12.085 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:12.085 ===================================================== 00:27:12.085 Controller Capabilities/Features 00:27:12.085 ================================ 00:27:12.085 Vendor ID: 0000 00:27:12.085 Subsystem Vendor ID: 0000 00:27:12.085 Serial Number: fcf37df8c550fdbdb16c 00:27:12.085 Model Number: Linux 00:27:12.085 Firmware Version: 6.7.0-68 00:27:12.085 Recommended Arb Burst: 0 00:27:12.085 IEEE OUI Identifier: 00 00 00 00:27:12.085 Multi-path I/O 00:27:12.085 May have multiple subsystem ports: No 00:27:12.085 May have multiple controllers: No 00:27:12.085 Associated with SR-IOV VF: No 00:27:12.085 Max Data Transfer Size: Unlimited 00:27:12.085 Max Number of Namespaces: 0 00:27:12.085 Max Number of I/O Queues: 1024 00:27:12.085 NVMe Specification Version (VS): 1.3 00:27:12.085 NVMe Specification Version (Identify): 1.3 00:27:12.085 Maximum Queue Entries: 1024 00:27:12.085 Contiguous Queues Required: No 00:27:12.085 Arbitration Mechanisms Supported 00:27:12.085 Weighted Round Robin: Not Supported 00:27:12.085 Vendor Specific: Not Supported 00:27:12.085 Reset Timeout: 7500 ms 00:27:12.085 Doorbell Stride: 4 bytes 00:27:12.085 NVM Subsystem Reset: Not Supported 00:27:12.085 Command Sets Supported 00:27:12.085 NVM Command Set: Supported 00:27:12.085 Boot Partition: Not Supported 00:27:12.085 Memory Page Size Minimum: 4096 bytes 00:27:12.085 Memory Page Size Maximum: 4096 bytes 00:27:12.085 Persistent Memory Region: Not Supported 00:27:12.085 Optional Asynchronous Events Supported 00:27:12.085 Namespace Attribute Notices: Not Supported 00:27:12.085 Firmware Activation Notices: Not Supported 00:27:12.085 ANA Change Notices: Not Supported 00:27:12.085 PLE Aggregate Log Change Notices: Not Supported 00:27:12.085 LBA Status Info Alert Notices: Not Supported 00:27:12.085 EGE Aggregate Log Change Notices: Not Supported 00:27:12.085 Normal NVM Subsystem Shutdown event: Not Supported 00:27:12.085 Zone Descriptor Change Notices: Not Supported 00:27:12.085 Discovery Log Change Notices: Supported 00:27:12.085 Controller Attributes 00:27:12.085 128-bit Host Identifier: Not Supported 00:27:12.085 Non-Operational Permissive Mode: Not Supported 00:27:12.085 NVM Sets: Not Supported 00:27:12.085 Read Recovery Levels: Not Supported 00:27:12.085 Endurance Groups: Not Supported 00:27:12.085 Predictable Latency Mode: Not Supported 00:27:12.085 Traffic Based Keep ALive: Not Supported 00:27:12.085 Namespace Granularity: Not Supported 00:27:12.085 SQ Associations: Not Supported 00:27:12.085 UUID List: Not Supported 00:27:12.085 Multi-Domain Subsystem: Not Supported 00:27:12.085 Fixed Capacity Management: Not Supported 00:27:12.085 Variable Capacity Management: Not Supported 00:27:12.085 Delete Endurance Group: Not Supported 00:27:12.085 Delete NVM Set: Not Supported 00:27:12.085 Extended LBA Formats Supported: Not Supported 00:27:12.085 Flexible Data Placement Supported: Not Supported 00:27:12.085 00:27:12.085 Controller Memory Buffer Support 00:27:12.085 ================================ 00:27:12.085 Supported: No 
00:27:12.085 00:27:12.085 Persistent Memory Region Support 00:27:12.085 ================================ 00:27:12.085 Supported: No 00:27:12.085 00:27:12.085 Admin Command Set Attributes 00:27:12.085 ============================ 00:27:12.085 Security Send/Receive: Not Supported 00:27:12.085 Format NVM: Not Supported 00:27:12.085 Firmware Activate/Download: Not Supported 00:27:12.085 Namespace Management: Not Supported 00:27:12.085 Device Self-Test: Not Supported 00:27:12.085 Directives: Not Supported 00:27:12.085 NVMe-MI: Not Supported 00:27:12.085 Virtualization Management: Not Supported 00:27:12.085 Doorbell Buffer Config: Not Supported 00:27:12.085 Get LBA Status Capability: Not Supported 00:27:12.085 Command & Feature Lockdown Capability: Not Supported 00:27:12.085 Abort Command Limit: 1 00:27:12.085 Async Event Request Limit: 1 00:27:12.085 Number of Firmware Slots: N/A 00:27:12.085 Firmware Slot 1 Read-Only: N/A 00:27:12.085 Firmware Activation Without Reset: N/A 00:27:12.085 Multiple Update Detection Support: N/A 00:27:12.085 Firmware Update Granularity: No Information Provided 00:27:12.085 Per-Namespace SMART Log: No 00:27:12.085 Asymmetric Namespace Access Log Page: Not Supported 00:27:12.086 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:12.086 Command Effects Log Page: Not Supported 00:27:12.086 Get Log Page Extended Data: Supported 00:27:12.086 Telemetry Log Pages: Not Supported 00:27:12.086 Persistent Event Log Pages: Not Supported 00:27:12.086 Supported Log Pages Log Page: May Support 00:27:12.086 Commands Supported & Effects Log Page: Not Supported 00:27:12.086 Feature Identifiers & Effects Log Page:May Support 00:27:12.086 NVMe-MI Commands & Effects Log Page: May Support 00:27:12.086 Data Area 4 for Telemetry Log: Not Supported 00:27:12.086 Error Log Page Entries Supported: 1 00:27:12.086 Keep Alive: Not Supported 00:27:12.086 00:27:12.086 NVM Command Set Attributes 00:27:12.086 ========================== 00:27:12.086 Submission Queue Entry Size 00:27:12.086 Max: 1 00:27:12.086 Min: 1 00:27:12.086 Completion Queue Entry Size 00:27:12.086 Max: 1 00:27:12.086 Min: 1 00:27:12.086 Number of Namespaces: 0 00:27:12.086 Compare Command: Not Supported 00:27:12.086 Write Uncorrectable Command: Not Supported 00:27:12.086 Dataset Management Command: Not Supported 00:27:12.086 Write Zeroes Command: Not Supported 00:27:12.086 Set Features Save Field: Not Supported 00:27:12.086 Reservations: Not Supported 00:27:12.086 Timestamp: Not Supported 00:27:12.086 Copy: Not Supported 00:27:12.086 Volatile Write Cache: Not Present 00:27:12.086 Atomic Write Unit (Normal): 1 00:27:12.086 Atomic Write Unit (PFail): 1 00:27:12.086 Atomic Compare & Write Unit: 1 00:27:12.086 Fused Compare & Write: Not Supported 00:27:12.086 Scatter-Gather List 00:27:12.086 SGL Command Set: Supported 00:27:12.086 SGL Keyed: Not Supported 00:27:12.086 SGL Bit Bucket Descriptor: Not Supported 00:27:12.086 SGL Metadata Pointer: Not Supported 00:27:12.086 Oversized SGL: Not Supported 00:27:12.086 SGL Metadata Address: Not Supported 00:27:12.086 SGL Offset: Supported 00:27:12.086 Transport SGL Data Block: Not Supported 00:27:12.086 Replay Protected Memory Block: Not Supported 00:27:12.086 00:27:12.086 Firmware Slot Information 00:27:12.086 ========================= 00:27:12.086 Active slot: 0 00:27:12.086 00:27:12.086 00:27:12.086 Error Log 00:27:12.086 ========= 00:27:12.086 00:27:12.086 Active Namespaces 00:27:12.086 ================= 00:27:12.086 Discovery Log Page 00:27:12.086 ================== 00:27:12.086 
Generation Counter: 2 00:27:12.086 Number of Records: 2 00:27:12.086 Record Format: 0 00:27:12.086 00:27:12.086 Discovery Log Entry 0 00:27:12.086 ---------------------- 00:27:12.086 Transport Type: 3 (TCP) 00:27:12.086 Address Family: 1 (IPv4) 00:27:12.086 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:12.086 Entry Flags: 00:27:12.086 Duplicate Returned Information: 0 00:27:12.086 Explicit Persistent Connection Support for Discovery: 0 00:27:12.086 Transport Requirements: 00:27:12.086 Secure Channel: Not Specified 00:27:12.086 Port ID: 1 (0x0001) 00:27:12.086 Controller ID: 65535 (0xffff) 00:27:12.086 Admin Max SQ Size: 32 00:27:12.086 Transport Service Identifier: 4420 00:27:12.086 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:12.086 Transport Address: 10.0.0.1 00:27:12.086 Discovery Log Entry 1 00:27:12.086 ---------------------- 00:27:12.086 Transport Type: 3 (TCP) 00:27:12.086 Address Family: 1 (IPv4) 00:27:12.086 Subsystem Type: 2 (NVM Subsystem) 00:27:12.086 Entry Flags: 00:27:12.086 Duplicate Returned Information: 0 00:27:12.086 Explicit Persistent Connection Support for Discovery: 0 00:27:12.086 Transport Requirements: 00:27:12.086 Secure Channel: Not Specified 00:27:12.086 Port ID: 1 (0x0001) 00:27:12.086 Controller ID: 65535 (0xffff) 00:27:12.086 Admin Max SQ Size: 32 00:27:12.086 Transport Service Identifier: 4420 00:27:12.086 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:12.086 Transport Address: 10.0.0.1 00:27:12.086 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:12.086 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.399 get_feature(0x01) failed 00:27:12.399 get_feature(0x02) failed 00:27:12.399 get_feature(0x04) failed 00:27:12.399 ===================================================== 00:27:12.399 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:12.399 ===================================================== 00:27:12.399 Controller Capabilities/Features 00:27:12.399 ================================ 00:27:12.399 Vendor ID: 0000 00:27:12.399 Subsystem Vendor ID: 0000 00:27:12.399 Serial Number: a67d719234954f303a83 00:27:12.399 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:12.399 Firmware Version: 6.7.0-68 00:27:12.399 Recommended Arb Burst: 6 00:27:12.399 IEEE OUI Identifier: 00 00 00 00:27:12.399 Multi-path I/O 00:27:12.399 May have multiple subsystem ports: Yes 00:27:12.399 May have multiple controllers: Yes 00:27:12.399 Associated with SR-IOV VF: No 00:27:12.399 Max Data Transfer Size: Unlimited 00:27:12.399 Max Number of Namespaces: 1024 00:27:12.399 Max Number of I/O Queues: 128 00:27:12.399 NVMe Specification Version (VS): 1.3 00:27:12.399 NVMe Specification Version (Identify): 1.3 00:27:12.399 Maximum Queue Entries: 1024 00:27:12.399 Contiguous Queues Required: No 00:27:12.399 Arbitration Mechanisms Supported 00:27:12.399 Weighted Round Robin: Not Supported 00:27:12.399 Vendor Specific: Not Supported 00:27:12.399 Reset Timeout: 7500 ms 00:27:12.399 Doorbell Stride: 4 bytes 00:27:12.399 NVM Subsystem Reset: Not Supported 00:27:12.399 Command Sets Supported 00:27:12.399 NVM Command Set: Supported 00:27:12.399 Boot Partition: Not Supported 00:27:12.399 Memory Page Size Minimum: 4096 bytes 00:27:12.399 Memory Page Size Maximum: 4096 bytes 00:27:12.399 
Persistent Memory Region: Not Supported 00:27:12.399 Optional Asynchronous Events Supported 00:27:12.399 Namespace Attribute Notices: Supported 00:27:12.399 Firmware Activation Notices: Not Supported 00:27:12.399 ANA Change Notices: Supported 00:27:12.399 PLE Aggregate Log Change Notices: Not Supported 00:27:12.399 LBA Status Info Alert Notices: Not Supported 00:27:12.399 EGE Aggregate Log Change Notices: Not Supported 00:27:12.399 Normal NVM Subsystem Shutdown event: Not Supported 00:27:12.399 Zone Descriptor Change Notices: Not Supported 00:27:12.399 Discovery Log Change Notices: Not Supported 00:27:12.399 Controller Attributes 00:27:12.399 128-bit Host Identifier: Supported 00:27:12.399 Non-Operational Permissive Mode: Not Supported 00:27:12.399 NVM Sets: Not Supported 00:27:12.399 Read Recovery Levels: Not Supported 00:27:12.399 Endurance Groups: Not Supported 00:27:12.399 Predictable Latency Mode: Not Supported 00:27:12.399 Traffic Based Keep ALive: Supported 00:27:12.399 Namespace Granularity: Not Supported 00:27:12.399 SQ Associations: Not Supported 00:27:12.399 UUID List: Not Supported 00:27:12.399 Multi-Domain Subsystem: Not Supported 00:27:12.399 Fixed Capacity Management: Not Supported 00:27:12.399 Variable Capacity Management: Not Supported 00:27:12.399 Delete Endurance Group: Not Supported 00:27:12.399 Delete NVM Set: Not Supported 00:27:12.399 Extended LBA Formats Supported: Not Supported 00:27:12.399 Flexible Data Placement Supported: Not Supported 00:27:12.399 00:27:12.399 Controller Memory Buffer Support 00:27:12.399 ================================ 00:27:12.399 Supported: No 00:27:12.399 00:27:12.399 Persistent Memory Region Support 00:27:12.399 ================================ 00:27:12.399 Supported: No 00:27:12.400 00:27:12.400 Admin Command Set Attributes 00:27:12.400 ============================ 00:27:12.400 Security Send/Receive: Not Supported 00:27:12.400 Format NVM: Not Supported 00:27:12.400 Firmware Activate/Download: Not Supported 00:27:12.400 Namespace Management: Not Supported 00:27:12.400 Device Self-Test: Not Supported 00:27:12.400 Directives: Not Supported 00:27:12.400 NVMe-MI: Not Supported 00:27:12.400 Virtualization Management: Not Supported 00:27:12.400 Doorbell Buffer Config: Not Supported 00:27:12.400 Get LBA Status Capability: Not Supported 00:27:12.400 Command & Feature Lockdown Capability: Not Supported 00:27:12.400 Abort Command Limit: 4 00:27:12.400 Async Event Request Limit: 4 00:27:12.400 Number of Firmware Slots: N/A 00:27:12.400 Firmware Slot 1 Read-Only: N/A 00:27:12.400 Firmware Activation Without Reset: N/A 00:27:12.400 Multiple Update Detection Support: N/A 00:27:12.400 Firmware Update Granularity: No Information Provided 00:27:12.400 Per-Namespace SMART Log: Yes 00:27:12.400 Asymmetric Namespace Access Log Page: Supported 00:27:12.400 ANA Transition Time : 10 sec 00:27:12.400 00:27:12.400 Asymmetric Namespace Access Capabilities 00:27:12.400 ANA Optimized State : Supported 00:27:12.400 ANA Non-Optimized State : Supported 00:27:12.400 ANA Inaccessible State : Supported 00:27:12.400 ANA Persistent Loss State : Supported 00:27:12.400 ANA Change State : Supported 00:27:12.400 ANAGRPID is not changed : No 00:27:12.400 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:12.400 00:27:12.400 ANA Group Identifier Maximum : 128 00:27:12.400 Number of ANA Group Identifiers : 128 00:27:12.400 Max Number of Allowed Namespaces : 1024 00:27:12.400 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:12.400 Command Effects Log Page: Supported 
00:27:12.400 Get Log Page Extended Data: Supported 00:27:12.400 Telemetry Log Pages: Not Supported 00:27:12.400 Persistent Event Log Pages: Not Supported 00:27:12.400 Supported Log Pages Log Page: May Support 00:27:12.400 Commands Supported & Effects Log Page: Not Supported 00:27:12.400 Feature Identifiers & Effects Log Page:May Support 00:27:12.400 NVMe-MI Commands & Effects Log Page: May Support 00:27:12.400 Data Area 4 for Telemetry Log: Not Supported 00:27:12.400 Error Log Page Entries Supported: 128 00:27:12.400 Keep Alive: Supported 00:27:12.400 Keep Alive Granularity: 1000 ms 00:27:12.400 00:27:12.400 NVM Command Set Attributes 00:27:12.400 ========================== 00:27:12.400 Submission Queue Entry Size 00:27:12.400 Max: 64 00:27:12.400 Min: 64 00:27:12.400 Completion Queue Entry Size 00:27:12.400 Max: 16 00:27:12.400 Min: 16 00:27:12.400 Number of Namespaces: 1024 00:27:12.400 Compare Command: Not Supported 00:27:12.400 Write Uncorrectable Command: Not Supported 00:27:12.400 Dataset Management Command: Supported 00:27:12.400 Write Zeroes Command: Supported 00:27:12.400 Set Features Save Field: Not Supported 00:27:12.400 Reservations: Not Supported 00:27:12.400 Timestamp: Not Supported 00:27:12.400 Copy: Not Supported 00:27:12.400 Volatile Write Cache: Present 00:27:12.400 Atomic Write Unit (Normal): 1 00:27:12.400 Atomic Write Unit (PFail): 1 00:27:12.400 Atomic Compare & Write Unit: 1 00:27:12.400 Fused Compare & Write: Not Supported 00:27:12.400 Scatter-Gather List 00:27:12.400 SGL Command Set: Supported 00:27:12.400 SGL Keyed: Not Supported 00:27:12.400 SGL Bit Bucket Descriptor: Not Supported 00:27:12.400 SGL Metadata Pointer: Not Supported 00:27:12.400 Oversized SGL: Not Supported 00:27:12.400 SGL Metadata Address: Not Supported 00:27:12.400 SGL Offset: Supported 00:27:12.400 Transport SGL Data Block: Not Supported 00:27:12.400 Replay Protected Memory Block: Not Supported 00:27:12.400 00:27:12.400 Firmware Slot Information 00:27:12.400 ========================= 00:27:12.400 Active slot: 0 00:27:12.400 00:27:12.400 Asymmetric Namespace Access 00:27:12.400 =========================== 00:27:12.400 Change Count : 0 00:27:12.400 Number of ANA Group Descriptors : 1 00:27:12.400 ANA Group Descriptor : 0 00:27:12.400 ANA Group ID : 1 00:27:12.400 Number of NSID Values : 1 00:27:12.400 Change Count : 0 00:27:12.400 ANA State : 1 00:27:12.400 Namespace Identifier : 1 00:27:12.400 00:27:12.400 Commands Supported and Effects 00:27:12.400 ============================== 00:27:12.400 Admin Commands 00:27:12.400 -------------- 00:27:12.400 Get Log Page (02h): Supported 00:27:12.400 Identify (06h): Supported 00:27:12.400 Abort (08h): Supported 00:27:12.400 Set Features (09h): Supported 00:27:12.400 Get Features (0Ah): Supported 00:27:12.400 Asynchronous Event Request (0Ch): Supported 00:27:12.400 Keep Alive (18h): Supported 00:27:12.400 I/O Commands 00:27:12.400 ------------ 00:27:12.400 Flush (00h): Supported 00:27:12.400 Write (01h): Supported LBA-Change 00:27:12.400 Read (02h): Supported 00:27:12.400 Write Zeroes (08h): Supported LBA-Change 00:27:12.400 Dataset Management (09h): Supported 00:27:12.400 00:27:12.400 Error Log 00:27:12.400 ========= 00:27:12.400 Entry: 0 00:27:12.400 Error Count: 0x3 00:27:12.400 Submission Queue Id: 0x0 00:27:12.400 Command Id: 0x5 00:27:12.400 Phase Bit: 0 00:27:12.400 Status Code: 0x2 00:27:12.400 Status Code Type: 0x0 00:27:12.400 Do Not Retry: 1 00:27:12.400 Error Location: 0x28 00:27:12.400 LBA: 0x0 00:27:12.400 Namespace: 0x0 00:27:12.400 Vendor Log 
Page: 0x0 00:27:12.400 ----------- 00:27:12.400 Entry: 1 00:27:12.400 Error Count: 0x2 00:27:12.400 Submission Queue Id: 0x0 00:27:12.400 Command Id: 0x5 00:27:12.400 Phase Bit: 0 00:27:12.400 Status Code: 0x2 00:27:12.400 Status Code Type: 0x0 00:27:12.400 Do Not Retry: 1 00:27:12.400 Error Location: 0x28 00:27:12.400 LBA: 0x0 00:27:12.400 Namespace: 0x0 00:27:12.400 Vendor Log Page: 0x0 00:27:12.400 ----------- 00:27:12.400 Entry: 2 00:27:12.400 Error Count: 0x1 00:27:12.400 Submission Queue Id: 0x0 00:27:12.400 Command Id: 0x4 00:27:12.400 Phase Bit: 0 00:27:12.400 Status Code: 0x2 00:27:12.400 Status Code Type: 0x0 00:27:12.400 Do Not Retry: 1 00:27:12.400 Error Location: 0x28 00:27:12.400 LBA: 0x0 00:27:12.400 Namespace: 0x0 00:27:12.400 Vendor Log Page: 0x0 00:27:12.400 00:27:12.400 Number of Queues 00:27:12.400 ================ 00:27:12.400 Number of I/O Submission Queues: 128 00:27:12.400 Number of I/O Completion Queues: 128 00:27:12.400 00:27:12.400 ZNS Specific Controller Data 00:27:12.400 ============================ 00:27:12.400 Zone Append Size Limit: 0 00:27:12.400 00:27:12.400 00:27:12.400 Active Namespaces 00:27:12.400 ================= 00:27:12.400 get_feature(0x05) failed 00:27:12.400 Namespace ID:1 00:27:12.400 Command Set Identifier: NVM (00h) 00:27:12.400 Deallocate: Supported 00:27:12.400 Deallocated/Unwritten Error: Not Supported 00:27:12.400 Deallocated Read Value: Unknown 00:27:12.400 Deallocate in Write Zeroes: Not Supported 00:27:12.400 Deallocated Guard Field: 0xFFFF 00:27:12.400 Flush: Supported 00:27:12.400 Reservation: Not Supported 00:27:12.400 Namespace Sharing Capabilities: Multiple Controllers 00:27:12.400 Size (in LBAs): 1953525168 (931GiB) 00:27:12.400 Capacity (in LBAs): 1953525168 (931GiB) 00:27:12.400 Utilization (in LBAs): 1953525168 (931GiB) 00:27:12.400 UUID: 4770163d-6be9-40a8-836a-fb262d70903b 00:27:12.400 Thin Provisioning: Not Supported 00:27:12.400 Per-NS Atomic Units: Yes 00:27:12.400 Atomic Boundary Size (Normal): 0 00:27:12.400 Atomic Boundary Size (PFail): 0 00:27:12.400 Atomic Boundary Offset: 0 00:27:12.400 NGUID/EUI64 Never Reused: No 00:27:12.400 ANA group ID: 1 00:27:12.400 Namespace Write Protected: No 00:27:12.400 Number of LBA Formats: 1 00:27:12.400 Current LBA Format: LBA Format #00 00:27:12.400 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:12.400 00:27:12.400 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:12.400 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:12.400 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:12.400 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:12.400 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:12.400 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:12.400 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:12.400 rmmod nvme_tcp 00:27:12.401 rmmod nvme_fabrics 00:27:12.401 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:12.401 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:12.401 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:12.401 19:19:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:12.401 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:12.401 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:12.401 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:12.401 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:12.401 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:12.401 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.401 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.401 19:19:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.302 19:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:14.302 19:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:14.302 19:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:14.302 19:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:14.303 19:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:14.303 19:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:14.303 19:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:14.303 19:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:14.303 19:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:14.303 19:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:14.303 19:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:16.205 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:16.205 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:16.205 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:16.205 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:16.205 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:16.205 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:16.205 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:16.205 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:16.205 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:16.205 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:16.205 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:16.205 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:16.205 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:16.205 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:16.205 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:27:16.205 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:17.141 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:27:17.141 00:27:17.141 real 0m11.617s 00:27:17.141 user 0m2.582s 00:27:17.141 sys 0m5.066s 00:27:17.141 19:19:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:17.141 19:19:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:17.141 ************************************ 00:27:17.141 END TEST nvmf_identify_kernel_target 00:27:17.141 ************************************ 00:27:17.141 19:19:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:17.141 19:19:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:17.141 19:19:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:17.141 19:19:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.401 ************************************ 00:27:17.401 START TEST nvmf_auth_host 00:27:17.401 ************************************ 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:17.401 * Looking for test storage... 00:27:17.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
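Worth noting amid the PATH noise above: the NVME_HOSTNQN/NVME_HOSTID pair set at nvmf/common.sh@17-18 comes from nvme gen-hostnqn, which simply wraps a random UUID in the 2014-08 NVMe naming form. An equivalent without nvme-cli (uuidgen assumed available):

NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"   # same shape nvme-cli emits
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}                        # bare UUID, as common.sh@18 keeps it
echo "$NVME_HOSTNQN ($NVME_HOSTID)"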
00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:17.401 19:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:20.684 19:19:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.684 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:20.685 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
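Having matched the two E810 ports (device id 0x159b) above, the script next resolves each PCI function to its kernel net devices through sysfs: every netdev backed by a PCI NIC is listed under that function's net/ directory, and the [[ up == up ]] checks at common.sh@390 indicate a link-state filter. A standalone sketch of that lookup (PCI addresses taken from this run; reading operstate for the filter is an assumption):

for pci in 0000:84:00.0 0000:84:00.1; do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $netdir ]] || continue
        dev=${netdir##*/}                                 # e.g. cvl_0_0, cvl_0_1
        [[ $(<"$netdir/operstate") == up ]] || continue   # assumed link-state filter
        echo "Found net device under $pci: $dev"
    done
done

This is where the cvl_0_0/cvl_0_1 names in the "Found net devices under ..." lines below come from.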
00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:20.685 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:20.685 Found net devices under 0000:84:00.0: cvl_0_0 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:20.685 Found net devices under 0000:84:00.1: cvl_0_1 00:27:20.685 19:19:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:20.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:27:20.685 00:27:20.685 --- 10.0.0.2 ping statistics --- 00:27:20.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.685 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:20.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:27:20.685 00:27:20.685 --- 10.0.0.1 ping statistics --- 00:27:20.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.685 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1753904 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1753904 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1753904 ']' 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
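waitforlisten's polling is elided from the trace; a minimal stand-in for the nvmfappstart/waitforlisten pair above, under the assumption that readiness can be detected by the RPC socket appearing (the real helper polls the app through SPDK's rpc.py, with max_retries=100 as set at common/autotest_common.sh@836):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
until [[ -S /var/tmp/spdk.sock ]]; do                     # hypothetical readiness probe
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done
echo "nvmf_tgt ($nvmfpid) is up on /var/tmp/spdk.sock"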
00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:20.685 19:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9dd804750b058e5ce6761b6742a5c208 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Vso 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9dd804750b058e5ce6761b6742a5c208 0 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9dd804750b058e5ce6761b6742a5c208 0 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9dd804750b058e5ce6761b6742a5c208 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Vso 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Vso 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Vso 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:20.944 19:19:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=53dd991cd4845b76b1b5a2119d209ecab5b948ed0ec3a7147598798bf49dd85c 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.22s 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 53dd991cd4845b76b1b5a2119d209ecab5b948ed0ec3a7147598798bf49dd85c 3 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 53dd991cd4845b76b1b5a2119d209ecab5b948ed0ec3a7147598798bf49dd85c 3 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=53dd991cd4845b76b1b5a2119d209ecab5b948ed0ec3a7147598798bf49dd85c 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.22s 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.22s 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.22s 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=128aadeb6e4a172729743dd17631ea6c9fc9b31a4fb77896 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.o2y 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 128aadeb6e4a172729743dd17631ea6c9fc9b31a4fb77896 0 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 128aadeb6e4a172729743dd17631ea6c9fc9b31a4fb77896 0 
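Each gen_dhchap_key call traced here draws len/2 random bytes as an ASCII hex string, and format_key wraps that string in the NVMe DH-HMAC-CHAP secret representation: DHHC-1:<hash id>:<base64 of secret || CRC-32>:, where the hash id is 00 (no transform), 01 (SHA-256), 02 (SHA-384), or 03 (SHA-512), and the secret bytes are the ASCII hex string itself — which is why the base64 payloads of the DHHC-1 keys further down decode back to the hex values logged here. A minimal sketch of the pair, using illustrative names rather than the exact SPDK helpers:

gen_dhchap_key() {
    local digest=$1 len=$2   # digest id: 0=null 1=sha256 2=sha384 3=sha512
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # emit DHHC-1:<id>:base64(secret || crc32_le(secret)):
    python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                    # ASCII hex string as-is
crc = struct.pack("<I", zlib.crc32(secret))      # little-endian CRC-32 tail
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
EOF
    chmod 0600 "$file"
    echo "$file"
}

The ckeys generated alongside each key enable bidirectional authentication; ckeys[4] is deliberately left empty below, so the last key is exercised host-to-controller only.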
00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=128aadeb6e4a172729743dd17631ea6c9fc9b31a4fb77896 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:20.944 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.o2y 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.o2y 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.o2y 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fe6da9267b9a894d8619d3ad4b486a3e36b6a687349a65a6 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.7Au 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fe6da9267b9a894d8619d3ad4b486a3e36b6a687349a65a6 2 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fe6da9267b9a894d8619d3ad4b486a3e36b6a687349a65a6 2 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fe6da9267b9a894d8619d3ad4b486a3e36b6a687349a65a6 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.7Au 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.7Au 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.7Au 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.203 19:19:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a0f39b88b3baff7a9b50c019d8747acf 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.l58 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a0f39b88b3baff7a9b50c019d8747acf 1 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a0f39b88b3baff7a9b50c019d8747acf 1 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a0f39b88b3baff7a9b50c019d8747acf 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.l58 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.l58 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.l58 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7f373f13a2c6a07af3730a7ba1b5814c 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Px6 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7f373f13a2c6a07af3730a7ba1b5814c 1 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7f373f13a2c6a07af3730a7ba1b5814c 1 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=7f373f13a2c6a07af3730a7ba1b5814c 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:21.203 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Px6 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Px6 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Px6 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8d2b7050fa5efcd1a7956f01f1efd9a1989dbf0d23db0372 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.VVx 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8d2b7050fa5efcd1a7956f01f1efd9a1989dbf0d23db0372 2 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8d2b7050fa5efcd1a7956f01f1efd9a1989dbf0d23db0372 2 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8d2b7050fa5efcd1a7956f01f1efd9a1989dbf0d23db0372 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:21.461 19:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.VVx 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.VVx 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.VVx 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:21.461 19:19:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5ae3eec1794408667e530a2311fba4cd 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GZR 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5ae3eec1794408667e530a2311fba4cd 0 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5ae3eec1794408667e530a2311fba4cd 0 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5ae3eec1794408667e530a2311fba4cd 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:21.461 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GZR 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GZR 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.GZR 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=357d590d2963f2a7b159dafac3eea22c8baf45bc305242e185042bdd847ada7a 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Gs4 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 357d590d2963f2a7b159dafac3eea22c8baf45bc305242e185042bdd847ada7a 3 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 357d590d2963f2a7b159dafac3eea22c8baf45bc305242e185042bdd847ada7a 3 00:27:21.719 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:21.720 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:21.720 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=357d590d2963f2a7b159dafac3eea22c8baf45bc305242e185042bdd847ada7a 00:27:21.720 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:21.720 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:27:21.720 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Gs4 00:27:21.720 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Gs4 00:27:21.720 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Gs4 00:27:21.720 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:21.720 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1753904 00:27:21.720 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1753904 ']' 00:27:21.720 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.720 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:21.720 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.720 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:21.720 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Vso 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.22s ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.22s 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.o2y 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.7Au ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.7Au 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.l58 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Px6 ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Px6 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.VVx 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.GZR ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.GZR 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Gs4 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.287 19:19:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:22.287 19:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:23.663 Waiting for block devices as requested 00:27:23.663 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:23.663 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:23.921 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:23.921 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:24.180 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:24.180 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:24.180 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:24.439 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:24.439 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:24.439 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:24.698 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:24.698 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:24.698 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:24.698 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:24.956 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:24.956 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:24.956 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:25.524 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:25.524 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:25.524 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:25.524 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:25.524 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:25.524 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:25.524 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:25.524 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:25.524 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:25.524 No valid GPT data, bailing 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:25.783 19:19:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:27:25.783 00:27:25.783 Discovery Log Number of Records 2, Generation counter 2 00:27:25.783 =====Discovery Log Entry 0====== 00:27:25.783 trtype: tcp 00:27:25.783 adrfam: ipv4 00:27:25.783 subtype: current discovery subsystem 00:27:25.783 treq: not specified, sq flow control disable supported 00:27:25.783 portid: 1 00:27:25.783 trsvcid: 4420 00:27:25.783 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:25.783 traddr: 10.0.0.1 00:27:25.783 eflags: none 00:27:25.783 sectype: none 00:27:25.783 =====Discovery Log Entry 1====== 00:27:25.783 trtype: tcp 00:27:25.783 adrfam: ipv4 00:27:25.783 subtype: nvme subsystem 00:27:25.783 treq: not specified, sq flow control disable supported 00:27:25.783 portid: 1 00:27:25.783 trsvcid: 4420 00:27:25.783 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:25.783 traddr: 10.0.0.1 00:27:25.783 eflags: none 00:27:25.783 sectype: none 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.783 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.042 nvme0n1 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.042 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: ]] 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
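From here the test iterates over every digest × DH group × key id combination: nvmet_auth_set_key points the kernel target's host entry at the expected hash, group, and secret(s), and the host side re-attaches with the matching keyring entries. One iteration, sketched against the kernel nvmet configfs layout — the xtrace above shows the echoed values but not their redirect targets, so the attribute paths below are inferred from the kernel's nvmet host entry; the RPC invocations mirror the trace:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest the target will demand
echo 'ffdhe2048'    > "$host/dhchap_dhgroup"   # DH group for the exchange
echo "$key"  > "$host/dhchap_key"              # host secret, DHHC-1 form
echo "$ckey" > "$host/dhchap_ctrl_key"         # controller secret (bidirectional)

# host side: restrict negotiation to the same parameters, then connect
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

A successful handshake surfaces as the nvme0n1 namespace in the output that follows; each iteration detaches the controller before moving on to the next key id.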
00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.043 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.301 nvme0n1 00:27:26.301 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.301 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.301 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.301 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.301 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.301 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.301 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.302 19:19:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.302 19:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.561 nvme0n1 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: ]] 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.561 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.820 nvme0n1 00:27:26.820 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.820 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.820 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.820 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:26.820 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.820 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.820 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.820 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.820 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.820 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: ]] 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.821 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.080 nvme0n1 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.080 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.339 nvme0n1 00:27:27.339 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.339 19:19:32 
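The nvmet_auth_set_key traces above (host/auth.sh@48-@51) are the target-side half of each iteration: xtrace hides redirections, so the four bare echo commands are best read as writes into the kernel nvmet configfs host entry. A minimal sketch of that provisioning step, assuming the usual /sys/kernel/config/nvmet layout and the host NQN used in this run (the attribute paths are inferred, they are not shown in the trace):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'      > "$host/dhchap_hash"      # HMAC digest for DH-HMAC-CHAP
  echo ffdhe2048           > "$host/dhchap_dhgroup"   # FFDHE group for the DH exchange
  echo 'DHHC-1:01:YTBm...' > "$host/dhchap_key"       # host secret (truncated here)
  echo 'DHHC-1:01:N2Yz...' > "$host/dhchap_ctrl_key"  # controller secret, written only when a ckey exists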
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.339 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.339 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.339 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.339 19:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: ]] 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.339 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.598 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:27.598 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.598 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.598 nvme0n1 00:27:27.598 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.598 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.598 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.598 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.598 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.598 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.856 
19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.856 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.113 nvme0n1 00:27:28.113 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.113 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: ]] 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.114 19:19:33 
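On the initiator side, connect_authenticate (host/auth.sh@60-@61) pins negotiation down to a single digest and dhgroup before attaching. The same two calls written out against SPDK's scripts/rpc.py, as a sketch only: rpc_cmd in autotest is a wrapper around the RPC client, so the flag spelling below simply mirrors the trace, and the key names key1/ckey1 are the ones set up earlier in this test:

  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072   # allow exactly one digest/dhgroup
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1            # enable bidirectional authentication

If the target's configfs entry and these options disagree on digest or dhgroup, the attach fails and no nvme0 controller shows up in the check that follows.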
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.114 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.372 nvme0n1 00:27:28.372 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.372 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.372 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.372 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.372 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.372 19:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: ]] 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.372 19:19:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.372 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.373 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:28.373 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.373 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.630 nvme0n1 00:27:28.630 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.630 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.630 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.630 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.630 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.888 19:19:34 
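The empty ckey for keyid=4 above ([[ -z '' ]] at host/auth.sh@51) is handled by a small bash idiom worth calling out: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) uses the ${var:+word} alternate-value expansion, so the flag pair is appended only when a controller key is actually defined for that keyid. A standalone illustration:

  ckeys=([0]=secret0 [4]=)   # keyid 4 deliberately has no controller key
  for keyid in 0 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
  done
  # prints: keyid=0 extra args: --dhchap-ctrlr-key ckey0
  #         keyid=4 extra args: <none>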
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:28.888 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.889 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.147 nvme0n1 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: ]] 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.147 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.148 19:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.714 nvme0n1 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:29.714 19:19:35 
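get_main_ns_ip, expanded in full just above (nvmf/common.sh@741-@755), resolves which address to attach to by transport. The trace lets the helper be reconstructed almost verbatim; the one assumed name below is the variable carrying the transport, which expands to tcp in this run, and any branches between @750 and @755 that were not exercised here are omitted:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # candidates hold variable *names*, not addresses
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1                  # indirect expansion yields 10.0.0.1 here
      echo "${!ip}"
  }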
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.714 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.973 nvme0n1 00:27:29.973 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:29.973 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.973 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.973 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.973 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.973 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: ]] 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.232 19:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.800 nvme0n1 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: ]] 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.800 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.058 nvme0n1 00:27:31.058 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.058 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.058 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.058 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.058 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.058 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.316 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.316 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.316 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.316 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.316 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.316 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.316 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:31.316 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.317 19:19:36 
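Each iteration ends with the same success check and teardown (host/auth.sh@64-@65), traced again just above for keyid=3: list controllers, require that exactly the expected nvme0 came up, then detach so the next digest/dhgroup/keyid combination starts clean. A compact sketch against scripts/rpc.py, with the jq filter taken straight from the trace:

  name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]] || exit 1     # attach without a controller means authentication failed
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0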
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.317 19:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.575 nvme0n1 00:27:31.575 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.575 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.575 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.575 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.575 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.575 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.575 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.575 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.575 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: ]] 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.576 19:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.511 nvme0n1 00:27:32.511 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.511 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.511 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.511 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.511 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.511 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 
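
The trace here is auth.sh's nested digest/dhgroup/keyid sweep: for each key, nvmet_auth_set_key pushes the DHHC-1 secret (and the bidirectional controller secret, when that keyid has one) to the kernel nvmet target, and connect_authenticate pins the SPDK initiator to the matching digest and DH group, attaches over TCP, confirms the controller surfaced as nvme0, and detaches before the next pass. Condensed into a standalone sketch, one pass (sha256/ffdhe4096, keyid 3, as traced above) would look roughly like this, with rpc.py standing in for the harness's rpc_cmd wrapper; the configfs attribute names are assumptions, since the trace shows only the echoed values (auth.sh@48-51) and elides the redirection targets, and key3/ckey3 name keyring entries registered earlier in the test, outside this excerpt:

hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0
key='DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==:'
ckey='DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22:'

# Target side: provision the per-host DH-HMAC-CHAP parameters in the kernel
# nvmet configfs tree (attribute names assumed, not shown in the trace).
host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
echo 'hmac(sha256)' > "$host_cfg/dhchap_hash"
echo ffdhe4096      > "$host_cfg/dhchap_dhgroup"
echo "$key"         > "$host_cfg/dhchap_key"
echo "$ckey"        > "$host_cfg/dhchap_ctrl_key"

# Host side: allow only the digest/dhgroup under test, then attach with the
# bidirectional key pair (key3/ckey3 are keyring names, not raw secrets).
rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key3 --dhchap-ctrlr-key ckey3

# Verify the authenticated controller came up, then tear it down so the next
# keyid starts clean.
[[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc.py bdev_nvme_detach_controller nvme0

Two details the trace makes explicit: keyid 4 carries no controller secret, so ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expands to nothing and that pass authenticates one-way only; and get_main_ns_ip selects the address by transport (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), which is why 10.0.0.1 is echoed ahead of every attach.
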
00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.770 19:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.337 nvme0n1 00:27:33.337 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.337 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.337 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.337 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.337 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.337 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.595 19:19:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: ]] 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.595 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.530 nvme0n1 00:27:34.530 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.530 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.530 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.530 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.530 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.530 19:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: ]] 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.530 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.495 nvme0n1 00:27:35.495 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.495 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.495 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.495 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.495 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.495 19:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.495 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.496 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:35.496 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.496 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.431 nvme0n1 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.431 19:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: ]] 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.431 19:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:38.332 nvme0n1 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:38.332 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.333 19:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.235 nvme0n1 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:40.235 
19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: ]] 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.235 19:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.138 nvme0n1 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: ]] 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.138 
19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.138 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.139 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.139 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.139 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.139 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.139 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.139 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.139 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.139 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.139 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.139 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.139 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.139 19:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.517 nvme0n1 00:27:43.517 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.517 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.517 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.517 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.517 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.517 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.776 19:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.681 nvme0n1 00:27:45.681 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.681 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: ]] 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.682 nvme0n1 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.682 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.942 nvme0n1 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:45.942 19:19:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: ]] 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.942 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.201 nvme0n1 00:27:46.201 19:19:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: ]] 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.201 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.202 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.202 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.202 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.202 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.202 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.202 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.202 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.202 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.202 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.202 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:46.202 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.202 19:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.460 nvme0n1 00:27:46.460 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.460 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.460 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.461 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.720 nvme0n1 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: ]] 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.720 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.721 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.980 nvme0n1 00:27:46.980 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.980 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.980 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.980 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.980 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.980 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.240 
19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.240 19:19:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.240 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.500 nvme0n1 00:27:47.500 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.500 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.500 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.500 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.500 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.500 19:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: ]] 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.500 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.761 nvme0n1 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:47.761 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: ]] 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.762 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.021 nvme0n1 00:27:48.021 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.021 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.021 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.021 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.021 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.021 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.280 
19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.280 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.281 19:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.540 nvme0n1 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.540 
19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: ]] 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.540 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.799 nvme0n1 00:27:48.799 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.799 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.799 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.799 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.799 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.799 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.058 19:19:54 
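The trace above completes one pass of the test's inner loop (host/auth.sh@101-103): for every DH group and every key index, the key is first programmed into the target via nvmet_auth_set_key, then the host connection is exercised via connect_authenticate. A minimal sketch of that loop, reconstructed from the trace — the keys/ckeys arrays and the fixed sha384 digest are as observed in this section, while the exact variable names in the script are assumptions:

    # one digest (sha384 here); the script iterates other digests in earlier sections
    for dhgroup in "${dhgroups[@]}"; do        # ffdhe4096, ffdhe6144, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do         # key indices 0..4
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # program the target side
            connect_authenticate sha384 "$dhgroup" "$keyid"  # attach, verify, detach
        done
    done
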
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.058 19:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.627 nvme0n1 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: ]] 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.627 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.887 nvme0n1 00:27:49.887 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.888 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.888 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.888 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.888 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.888 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.888 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.888 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.888 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.888 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: ]] 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.147 19:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.405 nvme0n1 00:27:50.405 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.405 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.405 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.405 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.405 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.405 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.665 19:19:56 
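Each attach resolves the target address through get_main_ns_ip (nvmf/common.sh@741-755). The trace shows the helper mapping the transport to the name of the environment variable that holds the address, then dereferencing that name. A sketch consistent with the traced steps (using TEST_TRANSPORT as the transport variable is an assumption):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # here: NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion, resolves to 10.0.0.1
        echo "${!ip}"
    }
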
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.665 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.666 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.666 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.925 nvme0n1 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: ]] 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.925 19:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.863 nvme0n1 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.863 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.122 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.122 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.122 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.122 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.122 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.122 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.122 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.122 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.122 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.122 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.122 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.122 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.122 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:52.122 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.122 19:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.057 nvme0n1 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.057 19:19:58 
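The secrets echoed above use the NVMe DH-HMAC-CHAP representation DHHC-1:<hash>:<base64>:, where, to the best of our reading of the format, the middle field identifies the transformation applied to the secret (00 = used as-is; 01/02/03 = SHA-256/384/512) and the base64 payload carries the key material followed by a 4-byte CRC-32. One of this run's keys can be inspected directly:

    key="DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT:"
    IFS=: read -r tag hash b64 _ <<< "$key"
    echo "$tag, hash id $hash"          # DHHC-1, hash id 00
    echo "$b64" | base64 -d | wc -c     # 36 bytes = 32-byte key + 4-byte CRC-32
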
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: ]] 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.057 19:19:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.057 19:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.992 nvme0n1 00:27:53.992 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.992 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.992 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.992 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.992 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.992 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.992 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.992 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.992 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.992 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: ]] 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:53.993 19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.993 
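The host-side half, connect_authenticate (host/auth.sh@55-65), is visible step by step in the trace: restrict the host to one digest and one DH group, attach with the matching key pair, confirm a controller named nvme0 came up, then detach so the next combination starts clean. Reconstructed as a sketch — the function body is inferred from the traced commands, not copied from the script source:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})  # as at host/auth.sh@58
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }
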
19:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.561 nvme0n1 00:27:54.561 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.561 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.561 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.561 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.561 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.820 19:20:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.755 nvme0n1 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.755 19:20:01 
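On the target side, nvmet_auth_set_key (host/auth.sh@42-51) echoes the HMAC name, the DH group, the key, and, when present, the controller key. The trace shows the echoes but not their destinations; with a kernel nvmet target they would plausibly land in the host entry's configfs attributes. A sketch under that assumption — the configfs path is not confirmed by this log:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # assumed location of the nvmet host entry for our hostnqn
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"     # 'hmac(sha384)' as echoed at @48
        echo "$dhgroup"      > "$host/dhchap_dhgroup"  # e.g. ffdhe8192, echoed at @49
        echo "$key"          > "$host/dhchap_key"      # DHHC-1 secret echoed at @50
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # @51, bidirectional auth
    }
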
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: ]] 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.755 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.756 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.756 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.756 19:20:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.658 nvme0n1 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.658 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.659 19:20:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.561 nvme0n1 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: ]] 00:27:59.561 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.562 
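The nvmet_auth_set_key calls traced above show only the helper's xtrace, not its body: the echoes at host/auth.sh@48-@51 emit the kernel crypto digest name ('hmac(sha384)'), the FFDHE group, and the DHHC-1 key and controller-key strings. A minimal sketch of what such a target-side helper plausibly does, assuming the Linux nvmet configfs auth attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under the host entry; the configfs paths and mechanism are assumptions, not taken from this log:

    # Hypothetical reconstruction; keys[] and ckeys[] are the harness's
    # arrays of DHHC-1 strings indexed by key slot.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"        # @48: 'hmac(sha384)'
        echo "$dhgroup" > "$host/dhchap_dhgroup"          # @49: ffdhe8192
        echo "${keys[keyid]}" > "$host/dhchap_key"        # @50: DHHC-1:...
        # Slot 4 carries no controller key, so the bidirectional write is guarded
        # (the [[ -z ... ]] test at @51 in the trace):
        [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
    }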
19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.562 19:20:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.464 nvme0n1 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: ]] 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.464 19:20:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.366 nvme0n1 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.366 19:20:08 
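The ckey=( ... ) assignment at host/auth.sh@58 above uses bash's :+ alternate-value expansion inside an array: when ckeys[keyid] is set and non-empty the array holds the two-word flag pair, otherwise it stays empty and "${ckey[@]}" expands to zero words, so the controller-key flag vanishes from the attach call. Isolated from the trace:

    # Optional flag pair: present only when a controller key exists for this slot.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"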
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.366 19:20:08 
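The nvmf/common.sh@741-@755 block that repeats before every attach is get_main_ns_ip: it maps the transport to the name of the environment variable holding the reachable address, then dereferences that name with indirect expansion. A reconstruction from the xtrace; the transport variable's name (written here as TEST_TRANSPORT) is an assumption, since only its value, tcp, is visible in the trace:

    get_main_ns_ip() {
        local ip                                              # @741
        local -A ip_candidates                                # @742
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP            # @744
        ip_candidates["tcp"]=NVMF_INITIATOR_IP                # @745

        # @747 traces both atoms: [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]]
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                  # @748: name, not value
        [[ -z ${!ip} ]] && return 1                           # @750: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                         # @755: echo 10.0.0.1
    }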
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.366 19:20:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.268 nvme0n1 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:05.268 nvme0n1 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.268 19:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.527 nvme0n1 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:05.527 
19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: ]] 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.527 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
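Each iteration first narrows the initiator to the single digest/DH-group pair under test via bdev_nvme_set_options (host/auth.sh@60), so the attach that follows can only negotiate that exact combination. The sha512/ffdhe2048 instance as issued above, reflowed:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048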
nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.528 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:05.528 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.528 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.786 nvme0n1 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: ]] 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.786 
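Note that the attach passes key names rather than DHHC-1 strings: key2 and ckey2 refer to keys the harness registered with the SPDK application before this excerpt begins. The call as traced above, reflowed for readability:

    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2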
19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.786 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.045 nvme0n1 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:06.045 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.046 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.304 nvme0n1 00:28:06.304 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.304 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.304 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.304 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.304 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.304 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.304 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.304 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.304 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.304 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.304 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: ]] 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.305 19:20:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.563 nvme0n1 00:28:06.563 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.563 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.563 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.563 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.563 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.563 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.563 
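After every authenticated attach the harness confirms that exactly one controller named nvme0 exists and then detaches it (host/auth.sh@64-@65). De-escaped from the trace; xtrace prints the quoted right-hand side as \n\v\m\e\0 so that it is matched literally rather than as a glob pattern:

    # Verify the authenticated connection produced the expected controller,
    # then tear it down before the next digest/dhgroup/keyid combination.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0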
19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.563 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.563 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.563 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.822 19:20:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.822 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.081 nvme0n1 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:28:07.081 19:20:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: ]] 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.081 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.082 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.082 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.082 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.082 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.082 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.082 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.082 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.082 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.082 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.082 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.082 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.082 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.082 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:07.082 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.082 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.340 nvme0n1 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: ]] 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.340 19:20:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.340 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.341 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.341 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.341 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.341 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.341 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:07.341 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.341 19:20:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.599 nvme0n1 00:28:07.600 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.600 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.600 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.600 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.600 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.600 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:07.862 
19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.862 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:08.133 nvme0n1 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: ]] 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:08.133 19:20:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.133 19:20:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.751 nvme0n1 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.751 19:20:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.751 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.752 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.752 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.752 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.752 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.752 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.752 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.752 19:20:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.752 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.752 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.752 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:08.752 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.752 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.010 nvme0n1 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: ]] 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.010 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.269 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.269 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.269 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.269 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.269 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.269 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.269 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.269 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.269 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.269 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.269 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.269 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.269 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.269 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.269 19:20:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.527 nvme0n1 00:28:09.527 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.527 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.527 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.527 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.527 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.527 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: ]] 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:09.786 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.787 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.354 nvme0n1 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.354 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.355 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.355 19:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.613 nvme0n1 00:28:10.613 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.613 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.613 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.613 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.613 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.613 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: ]] 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.872 19:20:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.872 19:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.807 nvme0n1 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.807 19:20:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.807 19:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.743 nvme0n1 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: ]] 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.743 19:20:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.679 nvme0n1 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: ]] 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.679 19:20:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.615 nvme0n1 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.615 19:20:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.615 19:20:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.550 nvme0n1 00:28:15.550 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.550 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.550 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.550 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.550 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.550 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.550 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.550 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.550 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.550 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.550 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWRkODA0NzUwYjA1OGU1Y2U2NzYxYjY3NDJhNWMyMDjvS7iT: 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: ]] 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTNkZDk5MWNkNDg0NWI3NmIxYjVhMjExOWQyMDllY2FiNWI5NDhlZDBlYzNhNzE0NzU5ODc5OGJmNDlkZDg1Y4pFxrs=: 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.551 19:20:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.453 nvme0n1 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.453 19:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.827 nvme0n1 00:28:18.827 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.827 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.827 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.827 19:20:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.827 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.827 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTBmMzliODhiM2JhZmY3YTliNTBjMDE5ZDg3NDdhY2aYjL2S: 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: ]] 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2YzNzNmMTNhMmM2YTA3YWYzNzMwYTdiYTFiNTgxNGNF+tdN: 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.086 19:20:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.086 19:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.988 nvme0n1 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQyYjcwNTBmYTVlZmNkMWE3OTU2ZjAxZjFlZmQ5YTE5ODlkYmYwZDIzZGIwMzcyGqb7Rw==: 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: ]] 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWFlM2VlYzE3OTQ0MDg2NjdlNTMwYTIzMTFmYmE0Y2QDxr22: 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:20.988 19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.988 
19:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.364 nvme0n1 00:28:22.364 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.364 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.364 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.364 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.364 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.364 19:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.364 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.364 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.364 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.364 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzU3ZDU5MGQyOTYzZjJhN2IxNTlkYWZhYzNlZWEyMmM4YmFmNDViYzMwNTI0MmUxODUwNDJiZGQ4NDdhZGE3YQzNqS4=: 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.622 19:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.523 nvme0n1 00:28:24.523 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.523 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.523 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.523 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.523 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.523 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.523 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.523 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.523 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.523 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.523 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.523 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:24.523 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.523 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTI4YWFkZWI2ZTRhMTcyNzI5NzQzZGQxNzYzMWVhNmM5ZmM5YjMxYTRmYjc3ODk2IgLwYw==: 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: ]] 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmU2ZGE5MjY3YjlhODk0ZDg2MTlkM2FkNGI0ODZhM2UzNmI2YTY4NzM0OWE2NWE2XgUYqQ==: 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.524 request: 00:28:24.524 { 00:28:24.524 "name": "nvme0", 00:28:24.524 "trtype": "tcp", 00:28:24.524 "traddr": "10.0.0.1", 00:28:24.524 "adrfam": "ipv4", 00:28:24.524 "trsvcid": "4420", 00:28:24.524 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:24.524 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:24.524 "prchk_reftag": false, 00:28:24.524 "prchk_guard": false, 00:28:24.524 "hdgst": false, 00:28:24.524 "ddgst": false, 00:28:24.524 "method": "bdev_nvme_attach_controller", 00:28:24.524 "req_id": 1 00:28:24.524 } 00:28:24.524 Got JSON-RPC error response 00:28:24.524 response: 00:28:24.524 { 00:28:24.524 "code": -5, 00:28:24.524 "message": "Input/output error" 00:28:24.524 } 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.524 19:20:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.524 19:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.524 request: 00:28:24.524 { 00:28:24.524 "name": "nvme0", 00:28:24.524 "trtype": "tcp", 00:28:24.524 "traddr": "10.0.0.1", 00:28:24.524 "adrfam": "ipv4", 00:28:24.524 "trsvcid": "4420", 00:28:24.524 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:24.524 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:24.524 "prchk_reftag": false, 00:28:24.524 "prchk_guard": false, 00:28:24.524 "hdgst": false, 00:28:24.524 "ddgst": false, 00:28:24.524 "dhchap_key": "key2", 00:28:24.524 "method": "bdev_nvme_attach_controller", 00:28:24.524 "req_id": 1 00:28:24.524 } 00:28:24.524 Got JSON-RPC error response 00:28:24.524 response: 00:28:24.524 { 00:28:24.524 "code": -5, 00:28:24.524 "message": "Input/output error" 00:28:24.524 } 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:24.524 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:24.525 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:24.525 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:24.525 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:24.525 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:24.525 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:24.525 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:24.525 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.525 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.783 request: 00:28:24.783 { 00:28:24.783 "name": "nvme0", 00:28:24.783 "trtype": "tcp", 00:28:24.783 "traddr": "10.0.0.1", 00:28:24.783 "adrfam": "ipv4", 00:28:24.783 "trsvcid": "4420", 00:28:24.783 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:24.783 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:24.783 "prchk_reftag": false, 00:28:24.783 "prchk_guard": false, 00:28:24.783 "hdgst": false, 00:28:24.783 "ddgst": false, 00:28:24.783 "dhchap_key": "key1", 00:28:24.783 "dhchap_ctrlr_key": "ckey2", 00:28:24.783 "method": "bdev_nvme_attach_controller", 00:28:24.783 "req_id": 1 00:28:24.783 } 00:28:24.783 Got JSON-RPC error response 00:28:24.783 response: 00:28:24.783 { 00:28:24.783 "code": -5, 00:28:24.783 "message": "Input/output error" 00:28:24.783 } 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:24.783 rmmod nvme_tcp 00:28:24.783 rmmod nvme_fabrics 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1753904 ']' 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1753904 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1753904 ']' 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1753904 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1753904 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1753904' 00:28:24.783 killing process with pid 1753904 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1753904 00:28:24.783 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1753904 00:28:25.048 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:25.048 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:25.048 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:25.048 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:25.048 19:20:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:25.048 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.048 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.048 19:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.580 19:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:27.580 19:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:27.580 19:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:27.580 19:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:27.580 19:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:27.580 19:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:27.580 19:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:27.580 19:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:27.580 19:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:27.580 19:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:27.580 19:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:27.580 19:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:27.580 19:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:28.954 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:28.954 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:28.954 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:28.954 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:28.954 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:28.954 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:28.954 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:28.954 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:29.211 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:29.211 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:29.211 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:29.211 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:29.211 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:29.212 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:29.212 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:29.212 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:30.148 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:28:30.148 19:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Vso /tmp/spdk.key-null.o2y /tmp/spdk.key-sha256.l58 /tmp/spdk.key-sha384.VVx /tmp/spdk.key-sha512.Gs4 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:30.148 19:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:31.522 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:31.522 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:31.522 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:31.523 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:31.523 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:31.523 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:31.523 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:31.523 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:31.523 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:31.523 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:31.523 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:31.523 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:31.523 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:31.523 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:31.523 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:31.523 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:31.523 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:31.782 00:28:31.782 real 1m14.485s 00:28:31.782 user 1m12.713s 00:28:31.782 sys 0m8.337s 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.782 ************************************ 00:28:31.782 END TEST nvmf_auth_host 00:28:31.782 ************************************ 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.782 ************************************ 00:28:31.782 START TEST nvmf_digest 00:28:31.782 ************************************ 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:31.782 * Looking for test storage... 
00:28:31.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:31.782 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:32.040 
19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:32.040 19:20:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:34.573 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:34.573 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:34.574 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:34.574 
19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:34.574 Found net devices under 0000:84:00.0: cvl_0_0 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:34.574 Found net devices under 0000:84:00.1: cvl_0_1 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.574 19:20:40 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:34.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:28:34.574 00:28:34.574 --- 10.0.0.2 ping statistics --- 00:28:34.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.574 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:34.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:28:34.574 00:28:34.574 --- 10.0.0.1 ping statistics --- 00:28:34.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.574 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:34.574 ************************************ 00:28:34.574 START TEST nvmf_digest_clean 00:28:34.574 ************************************ 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1765805 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1765805 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1765805 ']' 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:34.574 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:34.833 [2024-07-24 19:20:40.315818] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:28:34.833 [2024-07-24 19:20:40.315924] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.833 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.833 [2024-07-24 19:20:40.432586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.092 [2024-07-24 19:20:40.608359] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:35.092 [2024-07-24 19:20:40.608512] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:35.092 [2024-07-24 19:20:40.608538] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:35.092 [2024-07-24 19:20:40.608555] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:35.092 [2024-07-24 19:20:40.608570] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
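A note on the target startup sequence traced above: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and with --wait-for-rpc, so the app idles before subsystem init until the harness pushes configuration over /var/tmp/spdk.sock; the null0 bdev and the TCP transport/listener seen just below come from the rpc_cmd block at digest.sh@43 (which, since the app started paused, presumably also issues framework_start_init first). A minimal sketch of the same pattern, with workspace paths shortened:

    # start the target paused inside the test namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    # any pre-init RPCs (accel/sock tuning) would go here
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o   # matches NVMF_TRANSPORT_OPTS above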
00:28:35.092 [2024-07-24 19:20:40.608611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.092 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:35.092 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:35.092 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:35.092 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:35.092 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:35.092 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:35.092 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:35.092 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:35.093 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:35.093 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.093 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:35.352 null0 00:28:35.352 [2024-07-24 19:20:40.923721] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:35.352 [2024-07-24 19:20:40.948157] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1765954 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1765954 /var/tmp/bperf.sock 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1765954 ']' 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:35.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:35.352 19:20:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:35.352 [2024-07-24 19:20:41.003313] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:28:35.352 [2024-07-24 19:20:41.003404] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765954 ] 00:28:35.352 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.611 [2024-07-24 19:20:41.081289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.611 [2024-07-24 19:20:41.225044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.870 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:35.870 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:35.870 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:35.870 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:35.870 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:36.437 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:36.437 19:20:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:37.003 nvme0n1 00:28:37.003 19:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:37.003 19:20:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:37.003 Running I/O for 2 seconds... 
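The four RPC-driven steps just traced are the core of every digest run: bdevperf starts paused (--wait-for-rpc defers subsystem init, -z makes it wait for a perform_tests RPC instead of starting on its own), framework_start_init is sent over its private /var/tmp/bperf.sock, a controller is attached with --ddgst so the CRC32C data digest is negotiated on the NVMe/TCP queue pairs, and perform_tests kicks off the 2-second workload. Condensed from the commands above, with workspace paths shortened:

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests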
00:28:39.529 00:28:39.529 Latency(us) 00:28:39.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.529 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:39.529 nvme0n1 : 2.00 14910.71 58.24 0.00 0.00 8570.78 4708.88 19709.35 00:28:39.529 =================================================================================================================== 00:28:39.529 Total : 14910.71 58.24 0.00 0.00 8570.78 4708.88 19709.35 00:28:39.529 0 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:39.530 | select(.opcode=="crc32c") 00:28:39.530 | "\(.module_name) \(.executed)"' 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1765954 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1765954 ']' 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1765954 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1765954 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1765954' 00:28:39.530 killing process with pid 1765954 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1765954 00:28:39.530 Received shutdown signal, test time was about 2.000000 seconds 00:28:39.530 00:28:39.530 Latency(us) 00:28:39.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:39.530 =================================================================================================================== 00:28:39.530 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:39.530 19:20:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1765954 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1766483 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1766483 /var/tmp/bperf.sock 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1766483 ']' 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:39.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:39.788 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:39.788 [2024-07-24 19:20:45.348238] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:28:39.788 [2024-07-24 19:20:45.348333] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1766483 ] 00:28:39.788 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:39.788 Zero copy mechanism will not be used. 
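Two things worth noting as the second run spins up. First, the "zero copy threshold" notice simply records that 131072-byte I/Os exceed the 65536-byte zero-copy cutoff, so the large-block runs use the regular copy path; read their throughput numbers with that in mind. Second, each run gets a fresh bdevperf instance, torn down by killprocess once the stats check passes; reconstructed from the trace above, that teardown amounts to:

    kill -0 "$bperfpid"                                   # still alive?
    [ "$(ps --no-headers -o comm= "$bperfpid")" = sudo ] \
        || kill "$bperfpid"                               # never kill a bare sudo wrapper
    wait "$bperfpid"                                      # reap and propagate exit status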
00:28:39.788 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.788 [2024-07-24 19:20:45.421624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.047 [2024-07-24 19:20:45.565366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.047 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:40.047 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:40.047 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:40.047 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:40.047 19:20:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:40.614 19:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.614 19:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.872 nvme0n1 00:28:40.872 19:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:40.872 19:20:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:41.130 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:41.130 Zero copy mechanism will not be used. 00:28:41.130 Running I/O for 2 seconds... 
00:28:43.032 00:28:43.032 Latency(us) 00:28:43.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.032 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:43.032 nvme0n1 : 2.00 2677.38 334.67 0.00 0.00 5970.83 1128.68 8204.14 00:28:43.032 =================================================================================================================== 00:28:43.032 Total : 2677.38 334.67 0.00 0.00 5970.83 1128.68 8204.14 00:28:43.032 0 00:28:43.032 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:43.032 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:43.032 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:43.032 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:43.032 19:20:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:43.032 | select(.opcode=="crc32c") 00:28:43.032 | "\(.module_name) \(.executed)"' 00:28:43.611 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:43.611 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:43.611 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:43.611 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:43.611 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1766483 00:28:43.611 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1766483 ']' 00:28:43.611 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1766483 00:28:43.611 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:43.611 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:43.611 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1766483 00:28:43.611 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:43.611 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:43.611 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1766483' 00:28:43.611 killing process with pid 1766483 00:28:43.611 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1766483 00:28:43.611 Received shutdown signal, test time was about 2.000000 seconds 00:28:43.611 00:28:43.611 Latency(us) 00:28:43.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.611 =================================================================================================================== 00:28:43.611 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:43.611 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1766483 00:28:43.890 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:43.891 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:43.891 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:43.891 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:43.891 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:43.891 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:43.891 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:43.891 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1766893 00:28:43.891 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:43.891 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1766893 /var/tmp/bperf.sock 00:28:43.891 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1766893 ']' 00:28:43.891 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:43.891 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:43.891 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:43.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:43.891 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:43.891 19:20:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:43.891 [2024-07-24 19:20:49.570138] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:28:43.891 [2024-07-24 19:20:49.570236] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1766893 ] 00:28:44.149 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.149 [2024-07-24 19:20:49.642660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.149 [2024-07-24 19:20:49.783455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.083 19:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:45.083 19:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:45.083 19:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:45.083 19:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:45.083 19:20:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:45.341 19:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:45.341 19:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:46.273 nvme0n1 00:28:46.273 19:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:46.273 19:20:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:46.273 Running I/O for 2 seconds... 
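While the run below completes, note how each pass/fail decision is made: get_accel_stats queries the accel framework over the bperf socket and filters for CRC32C operations, and since scan_dsa=false throughout, the digest work must have executed in the software module for the test to pass. The check reduces to this pipeline (paths shortened):

    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # passes when this prints: software <count greater than 0>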
00:28:48.171 00:28:48.171 Latency(us) 00:28:48.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.171 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:48.171 nvme0n1 : 2.01 16418.81 64.14 0.00 0.00 7781.20 3713.71 12621.75 00:28:48.171 =================================================================================================================== 00:28:48.171 Total : 16418.81 64.14 0.00 0.00 7781.20 3713.71 12621.75 00:28:48.171 0 00:28:48.171 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:48.171 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:48.171 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:48.171 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:48.171 | select(.opcode=="crc32c") 00:28:48.171 | "\(.module_name) \(.executed)"' 00:28:48.171 19:20:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:48.735 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:48.735 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:48.735 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:48.736 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:48.736 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1766893 00:28:48.736 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1766893 ']' 00:28:48.736 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1766893 00:28:48.736 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:48.736 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:48.736 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1766893 00:28:48.736 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:48.736 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:48.736 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1766893' 00:28:48.736 killing process with pid 1766893 00:28:48.736 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1766893 00:28:48.736 Received shutdown signal, test time was about 2.000000 seconds 00:28:48.736 00:28:48.736 Latency(us) 00:28:48.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.736 =================================================================================================================== 00:28:48.736 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:48.736 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1766893 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1767553 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1767553 /var/tmp/bperf.sock 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1767553 ']' 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:48.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:48.994 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:48.994 [2024-07-24 19:20:54.632328] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:28:48.994 [2024-07-24 19:20:54.632424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1767553 ] 00:28:48.994 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:48.994 Zero copy mechanism will not be used. 
00:28:48.994 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.252 [2024-07-24 19:20:54.709463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.252 [2024-07-24 19:20:54.848126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.252 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:49.252 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:49.252 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:49.252 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:49.252 19:20:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:50.186 19:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:50.186 19:20:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:50.753 nvme0n1 00:28:50.753 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:50.753 19:20:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:51.011 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:51.011 Zero copy mechanism will not be used. 00:28:51.011 Running I/O for 2 seconds... 
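This is the last of the four parameter combinations nvmf_digest_clean sweeps; for reference, the run_bperf calls issued at digest.sh@128 through @131 are:

    run_bperf randread  4096   128 false   # small blocks, deep queue
    run_bperf randread  131072 16  false   # large blocks, shallow queue
    run_bperf randwrite 4096   128 false
    run_bperf randwrite 131072 16  false

The trailing false is scan_dsa, so every run exercises the software CRC32C path checked above.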
00:28:52.913
00:28:52.913 Latency(us)
00:28:52.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:52.913 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:52.913 nvme0n1 : 2.00 2868.67 358.58 0.00 0.00 5564.09 3446.71 7718.68
00:28:52.913 ===================================================================================================================
00:28:52.913 Total : 2868.67 358.58 0.00 0.00 5564.09 3446.71 7718.68
00:28:52.913 0
00:28:52.913 19:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:28:52.913 19:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:28:52.913 19:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:28:52.913 19:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:28:52.913 19:20:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:28:52.913 | select(.opcode=="crc32c")
00:28:52.913 | "\(.module_name) \(.executed)"'
00:28:53.480 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:28:53.480 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:28:53.480 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:28:53.480 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:28:53.480 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1767553
00:28:53.480 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1767553 ']'
00:28:53.480 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1767553
00:28:53.480 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:28:53.480 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:53.480 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1767553
00:28:53.480 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:53.480 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:53.480 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1767553'
killing process with pid 1767553
00:28:53.480 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1767553
Received shutdown signal, test time was about 2.000000 seconds
00:28:53.480
00:28:53.480 Latency(us)
00:28:53.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:53.480 ===================================================================================================================
00:28:53.480 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:53.480 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1767553
00:28:53.738 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1765805
00:28:53.738 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1765805 ']'
00:28:53.738 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1765805
00:28:53.738 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:28:53.738 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:53.738 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1765805
00:28:53.738 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:28:53.738 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:28:53.738 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1765805'
killing process with pid 1765805
00:28:53.738 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1765805
00:28:53.738 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1765805
00:28:54.306
00:28:54.306 real 0m19.539s
00:28:54.306 user 0m40.122s
00:28:54.306 sys 0m5.165s
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:28:54.306 ************************************
00:28:54.306 END TEST nvmf_digest_clean
00:28:54.306 ************************************
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:54.306 ************************************
00:28:54.306 START TEST nvmf_digest_error
00:28:54.306 ************************************
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1768118
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1768118
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1768118 ']'
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:54.306 19:20:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:54.306 [2024-07-24 19:20:59.962256] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization...
00:28:54.306 [2024-07-24 19:20:59.962424] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:54.565 EAL: No free 2048 kB hugepages reported on node 1
00:28:54.565 [2024-07-24 19:21:00.107971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:54.824 [2024-07-24 19:21:00.296347] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:54.824 [2024-07-24 19:21:00.296480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:54.824 [2024-07-24 19:21:00.296527] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:54.824 [2024-07-24 19:21:00.296544] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:54.824 [2024-07-24 19:21:00.296558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
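The app_setup_trace notices above are actionable: since the target was launched with -e 0xFFFF, every tracepoint group is enabled, and the trace buffer for shm id 0 can be inspected during or after the run. Both commands come straight from the notices (run on the same host; the first needs the target still up):

    spdk_trace -s nvmf -i 0          # snapshot the running nvmf_tgt's tracepoints
    cp /dev/shm/nvmf_trace.0 .       # or keep the shm-backed trace file for offline analysis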
00:28:54.824 [2024-07-24 19:21:00.296603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:54.824 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:54.824 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:54.824 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:54.824 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:54.824 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:54.824 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:54.824 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:28:54.824 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:54.824 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:54.824 [2024-07-24 19:21:00.469880] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:28:54.824 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:54.824 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:28:54.824 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:28:54.824 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:54.824 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:55.083 null0
00:28:55.083 [2024-07-24 19:21:00.650035] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:55.083 [2024-07-24 19:21:00.674450] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:55.083 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:55.083 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:28:55.083 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:55.083 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:55.083 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:55.083 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:55.083 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1768304
00:28:55.083 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:28:55.083 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1768304 /var/tmp/bperf.sock
00:28:55.083 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1768304 ']'
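Nothing from common_target_config is echoed except its side effects (the null0 bdev name, the TCP transport init, the listener on 10.0.0.2:4420), because the RPCs are fed to the target's /var/tmp/spdk.sock in one batch. As a rough hand-typed equivalent — the command names are real rpc.py methods, but the sizes and flags below are illustrative guesses, not the harness's exact arguments:

    ./scripts/rpc.py framework_start_init                      # target was held at --wait-for-rpc
    ./scripts/rpc.py bdev_null_create null0 100 512            # backing namespace (size/block size illustrative)
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The earlier accel_assign_opc -o crc32c -m error call is why the target was started with --wait-for-rpc in the first place: opcode-to-module assignments have to land before the accel framework initializes, and the accel_rpc.c notice confirms crc32c is now routed to the error-injection module.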
00:28:55.083 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:55.083 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:55.083 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:55.083 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:55.083 19:21:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:55.083 [2024-07-24 19:21:00.749019] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization...
00:28:55.083 [2024-07-24 19:21:00.749135] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768304 ]
00:28:55.342 EAL: No free 2048 kB hugepages reported on node 1
00:28:55.342 [2024-07-24 19:21:00.840401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:55.342 [2024-07-24 19:21:00.980445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:56.278 19:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:56.278 19:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:56.278 19:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:56.278 19:21:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:56.536 19:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:56.536 19:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:56.536 19:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:56.536 19:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:56.536 19:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:56.536 19:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:57.101 nvme0n1
00:28:57.101 19:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:57.101 19:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:57.101 19:21:02
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:57.101 19:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.101 19:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:57.101 19:21:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:57.101 Running I/O for 2 seconds... 00:28:57.101 [2024-07-24 19:21:02.709866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.101 [2024-07-24 19:21:02.709935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.101 [2024-07-24 19:21:02.709968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.101 [2024-07-24 19:21:02.731402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.101 [2024-07-24 19:21:02.731456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.101 [2024-07-24 19:21:02.731492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.101 [2024-07-24 19:21:02.751103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.101 [2024-07-24 19:21:02.751147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.101 [2024-07-24 19:21:02.751171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.101 [2024-07-24 19:21:02.767286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.101 [2024-07-24 19:21:02.767330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.101 [2024-07-24 19:21:02.767354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.101 [2024-07-24 19:21:02.786085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.101 [2024-07-24 19:21:02.786129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.101 [2024-07-24 19:21:02.786153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.360 [2024-07-24 19:21:02.800762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.360 [2024-07-24 19:21:02.800805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.360 [2024-07-24 19:21:02.800839] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.360 [2024-07-24 19:21:02.817616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.360 [2024-07-24 19:21:02.817659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.360 [2024-07-24 19:21:02.817683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.360 [2024-07-24 19:21:02.833827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.360 [2024-07-24 19:21:02.833877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.360 [2024-07-24 19:21:02.833901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.360 [2024-07-24 19:21:02.851531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.360 [2024-07-24 19:21:02.851573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.360 [2024-07-24 19:21:02.851597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.360 [2024-07-24 19:21:02.868602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.360 [2024-07-24 19:21:02.868653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.360 [2024-07-24 19:21:02.868677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.360 [2024-07-24 19:21:02.888064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.360 [2024-07-24 19:21:02.888108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.360 [2024-07-24 19:21:02.888131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.360 [2024-07-24 19:21:02.902506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.360 [2024-07-24 19:21:02.902548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.360 [2024-07-24 19:21:02.902572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.360 [2024-07-24 19:21:02.921979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.360 [2024-07-24 19:21:02.922021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21763 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:57.360 [2024-07-24 19:21:02.922046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.360 [2024-07-24 19:21:02.937549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.360 [2024-07-24 19:21:02.937591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.360 [2024-07-24 19:21:02.937615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.360 [2024-07-24 19:21:02.957896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.360 [2024-07-24 19:21:02.957950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.360 [2024-07-24 19:21:02.957974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.360 [2024-07-24 19:21:02.977868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.360 [2024-07-24 19:21:02.977922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.360 [2024-07-24 19:21:02.977956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.360 [2024-07-24 19:21:02.992996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.360 [2024-07-24 19:21:02.993041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.360 [2024-07-24 19:21:02.993066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.360 [2024-07-24 19:21:03.015956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.361 [2024-07-24 19:21:03.016001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.361 [2024-07-24 19:21:03.016025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.361 [2024-07-24 19:21:03.033290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.361 [2024-07-24 19:21:03.033333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.361 [2024-07-24 19:21:03.033357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.361 [2024-07-24 19:21:03.048703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.361 [2024-07-24 19:21:03.048746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:104 nsid:1 lba:15687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.361 [2024-07-24 19:21:03.048769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.620 [2024-07-24 19:21:03.066586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.620 [2024-07-24 19:21:03.066629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.620 [2024-07-24 19:21:03.066653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.620 [2024-07-24 19:21:03.086217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.620 [2024-07-24 19:21:03.086262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.620 [2024-07-24 19:21:03.086296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.620 [2024-07-24 19:21:03.102844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.620 [2024-07-24 19:21:03.102887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.620 [2024-07-24 19:21:03.102912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.620 [2024-07-24 19:21:03.118210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.620 [2024-07-24 19:21:03.118253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.620 [2024-07-24 19:21:03.118277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.620 [2024-07-24 19:21:03.135425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.620 [2024-07-24 19:21:03.135489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.620 [2024-07-24 19:21:03.135514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.620 [2024-07-24 19:21:03.150632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.620 [2024-07-24 19:21:03.150673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.620 [2024-07-24 19:21:03.150698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.620 [2024-07-24 19:21:03.174105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.620 [2024-07-24 19:21:03.174149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.620 [2024-07-24 19:21:03.174178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.620 [2024-07-24 19:21:03.192345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.620 [2024-07-24 19:21:03.192388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.620 [2024-07-24 19:21:03.192413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.620 [2024-07-24 19:21:03.211466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.620 [2024-07-24 19:21:03.211507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.620 [2024-07-24 19:21:03.211531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.620 [2024-07-24 19:21:03.225424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.620 [2024-07-24 19:21:03.225482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.620 [2024-07-24 19:21:03.225506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.620 [2024-07-24 19:21:03.243938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.620 [2024-07-24 19:21:03.243980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.620 [2024-07-24 19:21:03.244004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.620 [2024-07-24 19:21:03.263208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.620 [2024-07-24 19:21:03.263251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.620 [2024-07-24 19:21:03.263276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.620 [2024-07-24 19:21:03.278102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.620 [2024-07-24 19:21:03.278144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.620 [2024-07-24 19:21:03.278168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.620 [2024-07-24 19:21:03.294692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 
00:28:57.620 [2024-07-24 19:21:03.294734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.620 [2024-07-24 19:21:03.294758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.620 [2024-07-24 19:21:03.311132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.620 [2024-07-24 19:21:03.311182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.620 [2024-07-24 19:21:03.311206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.878 [2024-07-24 19:21:03.328316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.878 [2024-07-24 19:21:03.328357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.878 [2024-07-24 19:21:03.328381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.878 [2024-07-24 19:21:03.346309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.878 [2024-07-24 19:21:03.346363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.878 [2024-07-24 19:21:03.346386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.878 [2024-07-24 19:21:03.362261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.878 [2024-07-24 19:21:03.362303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.878 [2024-07-24 19:21:03.362326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.878 [2024-07-24 19:21:03.378608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.878 [2024-07-24 19:21:03.378649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.878 [2024-07-24 19:21:03.378673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.878 [2024-07-24 19:21:03.395466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.878 [2024-07-24 19:21:03.395507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.878 [2024-07-24 19:21:03.395532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.878 [2024-07-24 19:21:03.411631] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.878 [2024-07-24 19:21:03.411674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.878 [2024-07-24 19:21:03.411698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.878 [2024-07-24 19:21:03.427617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.878 [2024-07-24 19:21:03.427659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.878 [2024-07-24 19:21:03.427693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.879 [2024-07-24 19:21:03.447165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.879 [2024-07-24 19:21:03.447219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.879 [2024-07-24 19:21:03.447254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.879 [2024-07-24 19:21:03.462218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.879 [2024-07-24 19:21:03.462261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.879 [2024-07-24 19:21:03.462285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.879 [2024-07-24 19:21:03.484075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.879 [2024-07-24 19:21:03.484117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.879 [2024-07-24 19:21:03.484141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.879 [2024-07-24 19:21:03.499308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.879 [2024-07-24 19:21:03.499350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.879 [2024-07-24 19:21:03.499373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.879 [2024-07-24 19:21:03.517377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.879 [2024-07-24 19:21:03.517438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.879 [2024-07-24 19:21:03.517464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:57.879 [2024-07-24 19:21:03.533986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.879 [2024-07-24 19:21:03.534028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.879 [2024-07-24 19:21:03.534058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.879 [2024-07-24 19:21:03.550571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.879 [2024-07-24 19:21:03.550614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.879 [2024-07-24 19:21:03.550637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.879 [2024-07-24 19:21:03.569144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:57.879 [2024-07-24 19:21:03.569188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.879 [2024-07-24 19:21:03.569211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.138 [2024-07-24 19:21:03.585420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.138 [2024-07-24 19:21:03.585488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.138 [2024-07-24 19:21:03.585514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.138 [2024-07-24 19:21:03.599960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.138 [2024-07-24 19:21:03.600004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.138 [2024-07-24 19:21:03.600027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.138 [2024-07-24 19:21:03.619457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.138 [2024-07-24 19:21:03.619510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.138 [2024-07-24 19:21:03.619534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.138 [2024-07-24 19:21:03.639387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.138 [2024-07-24 19:21:03.639440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.138 [2024-07-24 19:21:03.639466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.138 [2024-07-24 19:21:03.655038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.138 [2024-07-24 19:21:03.655081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.138 [2024-07-24 19:21:03.655104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.138 [2024-07-24 19:21:03.675070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.138 [2024-07-24 19:21:03.675113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.138 [2024-07-24 19:21:03.675137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.138 [2024-07-24 19:21:03.690392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.138 [2024-07-24 19:21:03.690445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.138 [2024-07-24 19:21:03.690472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.138 [2024-07-24 19:21:03.711136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.138 [2024-07-24 19:21:03.711178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.138 [2024-07-24 19:21:03.711202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.138 [2024-07-24 19:21:03.732663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.138 [2024-07-24 19:21:03.732717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.138 [2024-07-24 19:21:03.732741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.138 [2024-07-24 19:21:03.753190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.138 [2024-07-24 19:21:03.753232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.138 [2024-07-24 19:21:03.753257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.138 [2024-07-24 19:21:03.772154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.138 [2024-07-24 19:21:03.772197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.138 [2024-07-24 19:21:03.772221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.138 [2024-07-24 19:21:03.788690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.138 [2024-07-24 19:21:03.788733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.138 [2024-07-24 19:21:03.788757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.138 [2024-07-24 19:21:03.807245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.138 [2024-07-24 19:21:03.807286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.138 [2024-07-24 19:21:03.807311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.138 [2024-07-24 19:21:03.825140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.138 [2024-07-24 19:21:03.825193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.138 [2024-07-24 19:21:03.825216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.397 [2024-07-24 19:21:03.840870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.397 [2024-07-24 19:21:03.840912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-07-24 19:21:03.840937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.397 [2024-07-24 19:21:03.859330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.397 [2024-07-24 19:21:03.859372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-07-24 19:21:03.859395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.397 [2024-07-24 19:21:03.874730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.397 [2024-07-24 19:21:03.874772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-07-24 19:21:03.874795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.397 [2024-07-24 19:21:03.892139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.397 [2024-07-24 19:21:03.892188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:58.397 [2024-07-24 19:21:03.892213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.397 [2024-07-24 19:21:03.911153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.397 [2024-07-24 19:21:03.911194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-07-24 19:21:03.911216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.397 [2024-07-24 19:21:03.925126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.397 [2024-07-24 19:21:03.925168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-07-24 19:21:03.925191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.397 [2024-07-24 19:21:03.944517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.397 [2024-07-24 19:21:03.944559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-07-24 19:21:03.944582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.397 [2024-07-24 19:21:03.960389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.397 [2024-07-24 19:21:03.960439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-07-24 19:21:03.960466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.397 [2024-07-24 19:21:03.981921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.397 [2024-07-24 19:21:03.981974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-07-24 19:21:03.981999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.397 [2024-07-24 19:21:03.999273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.397 [2024-07-24 19:21:03.999314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.397 [2024-07-24 19:21:03.999337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.397 [2024-07-24 19:21:04.018028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0) 00:28:58.397 [2024-07-24 19:21:04.018071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 
lba:24296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:58.397 [2024-07-24 19:21:04.018095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:58.397 [2024-07-24 19:21:04.035379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b782f0)
00:28:58.397 [2024-07-24 19:21:04.035422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:58.397 [2024-07-24 19:21:04.035460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... further identical data digest error / COMMAND TRANSIENT TRANSPORT ERROR record groups (READ, qid:1, len:1, varying cid and lba) repeat from 19:21:04.049 through 19:21:04.689 and are omitted here ...]
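Each injected CRC failure above appears as a three-record group: the nvme_tcp data digest *ERROR*, the READ command it hit, and the completion carrying status COMMAND TRANSIENT TRANSPORT ERROR (sct 0x0, sc 0x22), which is what lets the host retry the I/O instead of failing it. For eyeballing a saved console log, a quick tally of the two sides of each group (log file name is hypothetical; a sketch, not part of the test itself):

grep -c 'data digest error on tqpair' nvmf-digest-console.log        # one line per injected digest failure
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' nvmf-digest-console.log  # should track the count above one-for-one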
00:28:59.176
00:28:59.176 Latency(us)
00:28:59.176 Device Information : runtime(s)     IOPS     MiB/s   Fail/s    TO/s   Average      min      max
00:28:59.176 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:59.176 nvme0n1            :       2.00  14391.79    56.22     0.00    0.00   8879.48  4781.70 32039.82
00:28:59.176 ===================================================================================================================
00:28:59.176 Total              :              14391.79    56.22     0.00    0.00   8879.48  4781.70 32039.82
00:28:59.176 0
00:28:59.176 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:59.176 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:59.176 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:59.176 | .driver_specific
00:28:59.176 | .nvme_error
00:28:59.176 | .status_code
00:28:59.176 | .command_transient_transport_error'
00:28:59.176 19:21:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:59.774 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 113 > 0 ))
00:28:59.774 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1768304
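The assertion just traced is the actual pass/fail check of this test case: because bdev_nvme_set_options ran earlier with --nvme-error-stat, the NVMe bdev layer keeps per-status-code error counters, and get_transient_errcount pulls the COMMAND TRANSIENT TRANSPORT ERROR count for the bdev out of bdev_get_iostat over the bperf RPC socket (113 transient errors in this run). A minimal standalone sketch of that helper, assuming the same socket path and the JSON layout implied by the jq filter in the trace above (a reconstruction, not the digest.sh source):

# hypothetical reconstruction of digest.sh's get_transient_errcount helper
get_transient_errcount() {
    local bdev=$1
    # iostat carries driver_specific.nvme_error counters once --nvme-error-stat is enabled
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
}

(( $(get_transient_errcount nvme0n1) > 0 ))   # the test only passes if injected errors were really observed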
00:28:59.774 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1768304 ']'
00:28:59.774 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1768304
00:28:59.774 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:59.774 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:59.774 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1768304
00:28:59.774 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:59.774 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:59.774 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1768304'
00:28:59.774 killing process with pid 1768304
00:28:59.774 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1768304
00:28:59.774 Received shutdown signal, test time was about 2.000000 seconds
00:28:59.774
00:28:59.774 Latency(us)
00:28:59.774 Device Information : runtime(s)     IOPS     MiB/s   Fail/s    TO/s   Average      min      max
00:28:59.774 ===================================================================================================================
00:28:59.774 Total              :                  0.00     0.00     0.00    0.00      0.00     0.00     0.00
00:28:59.774 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1768304
00:29:00.031 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:00.031 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:00.031 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:00.031 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:00.031 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:00.031 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1768920
00:29:00.031 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:00.031 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1768920 /var/tmp/bperf.sock
00:29:00.031 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1768920 ']'
00:29:00.031 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:00.031 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:00.031 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:00.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
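With the first bdevperf instance gone, run_bperf_err launches a fresh one for the 128 KiB / queue depth 16 pass. The '-z' flag makes bdevperf come up idle and wait for RPC, and '-r' points both sides at a private UNIX-domain socket, so the bdev under test can be wired up before any I/O starts. A condensed sketch of that launch pattern under the same paths as the trace (the polling loop stands in for autotest's waitforlisten helper):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# -m 2: core mask; -w randread -o 131072 -q 16 -t 2: workload; -z: start idle and wait for RPC
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# poll the RPC socket until the app answers (waitforlisten allows on the order of 100 retries)
for ((i = 0; i < 100; i++)); do
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods &>/dev/null && break
    sleep 0.1
done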
00:29:00.032 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:00.032 19:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:00.032 [2024-07-24 19:21:05.663557] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization...
00:29:00.032 [2024-07-24 19:21:05.663653] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768920 ]
00:29:00.032 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:00.032 Zero copy mechanism will not be used.
00:29:00.032 EAL: No free 2048 kB hugepages reported on node 1
00:29:00.289 [2024-07-24 19:21:05.740921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:00.289 [2024-07-24 19:21:05.882257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:00.547 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:00.547 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:29:00.547 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:00.547 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:00.805 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:00.805 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:00.805 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:00.805 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:00.805 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:00.805 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:01.371 nvme0n1
00:29:01.371 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:01.371 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:01.371 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:01.371 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:01.371 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
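That block of RPCs is the whole error-injection setup for this pass: NVMe error-status counters and unlimited bdev retries are switched on, the controller is attached with the TCP data digest ('--ddgst') enabled, and the software crc32c path in the accel layer is first cleared and then told to corrupt digests at interval 32. Every corrupted READ then completes with a transient transport error and is retried rather than failed. The same sequence as a standalone sketch (socket path and addresses reused from the trace; a reconstruction, not the digest.sh source):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# keep per-status-code NVMe error counters and retry failed I/O indefinitely
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# make sure no stale crc32c error injection is active before attaching
$RPC accel_error_inject_error -o crc32c -t disable
# attach the target with TCP data digest on, so payloads are crc32c-checked on receive
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# now corrupt crc32c results at an interval of 32 operations
$RPC accel_error_inject_error -o crc32c -t corrupt -i 32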
00:29:01.371 19:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:01.630 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:01.630 Zero copy mechanism will not be used.
00:29:01.630 Running I/O for 2 seconds...
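perform_tests is what releases the '-z'-paused bdevperf: the helper script connects to the same RPC socket and kicks off the configured randread workload for its two-second run, and the digest failures that follow are the interval-32 crc32c corruption doing its job on 32-block (len:32) READs. For mining a saved log of such a run, the affected LBAs can be pulled out with a one-liner (log file name is hypothetical):

# print the lba:<n> field of every corrupted 32-block READ command record
awk '/READ sqid:1/ && /len:32/ { for (i = 1; i <= NF; i++) if ($i ~ /^lba:/) print substr($i, 5) }' nvmf-digest-console.log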
00:29:01.630 [2024-07-24 19:21:07.133223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30)
00:29:01.630 [2024-07-24 19:21:07.133289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.630 [2024-07-24 19:21:07.133316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... further identical data digest error / COMMAND TRANSIENT TRANSPORT ERROR record groups (READ, qid:1, len:32, varying cid and lba) repeat from 19:21:07.142 through 19:21:07.984 and are omitted here ...]
00:29:02.411 [2024-07-24 19:21:07.993574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30)
00:29:02.411 [2024-07-24 19:21:07.993614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.411 [2024-07-24 19:21:07.993637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:02.411 [2024-07-24 19:21:08.002249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.411 [2024-07-24 19:21:08.002290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.411 [2024-07-24 19:21:08.002313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.411 [2024-07-24 19:21:08.010933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.411 [2024-07-24 19:21:08.010974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.411 [2024-07-24 19:21:08.010997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.411 [2024-07-24 19:21:08.019396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.411 [2024-07-24 19:21:08.019446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.411 [2024-07-24 19:21:08.019472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.411 [2024-07-24 19:21:08.027861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.411 [2024-07-24 19:21:08.027902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.411 [2024-07-24 19:21:08.027925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.411 [2024-07-24 19:21:08.036284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.411 [2024-07-24 19:21:08.036325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.411 [2024-07-24 19:21:08.036349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.411 [2024-07-24 19:21:08.044840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.411 [2024-07-24 19:21:08.044880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.411 [2024-07-24 19:21:08.044903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.411 [2024-07-24 19:21:08.053409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.411 [2024-07-24 19:21:08.053457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.411 [2024-07-24 19:21:08.053481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.411 [2024-07-24 19:21:08.061994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.411 [2024-07-24 19:21:08.062034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.411 [2024-07-24 19:21:08.062057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.411 [2024-07-24 19:21:08.070677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.411 [2024-07-24 19:21:08.070718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.411 [2024-07-24 19:21:08.070741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.411 [2024-07-24 19:21:08.079336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.411 [2024-07-24 19:21:08.079376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.411 [2024-07-24 19:21:08.079399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.411 [2024-07-24 19:21:08.087973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.411 [2024-07-24 19:21:08.088014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.411 [2024-07-24 19:21:08.088036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.411 [2024-07-24 19:21:08.096607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.411 [2024-07-24 19:21:08.096647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.411 [2024-07-24 19:21:08.096678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.411 [2024-07-24 19:21:08.105420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.411 [2024-07-24 19:21:08.105486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.411 [2024-07-24 19:21:08.105509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.670 [2024-07-24 19:21:08.114104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.670 [2024-07-24 19:21:08.114144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.670 [2024-07-24 19:21:08.114167] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.670 [2024-07-24 19:21:08.119710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.670 [2024-07-24 19:21:08.119750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.670 [2024-07-24 19:21:08.119773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.670 [2024-07-24 19:21:08.126889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.670 [2024-07-24 19:21:08.126929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.670 [2024-07-24 19:21:08.126952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.670 [2024-07-24 19:21:08.135408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.670 [2024-07-24 19:21:08.135457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.670 [2024-07-24 19:21:08.135482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.670 [2024-07-24 19:21:08.143964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.670 [2024-07-24 19:21:08.144005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.670 [2024-07-24 19:21:08.144028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.670 [2024-07-24 19:21:08.152538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.670 [2024-07-24 19:21:08.152579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.152603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.161491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.161532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.161555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.170585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.170634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.170660] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.179350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.179392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.179415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.187915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.187956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.187979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.196512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.196553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.196577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.205007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.205049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.205071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.213603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.213642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.213663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.222348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.222390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.222413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.230965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.231005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:02.671 [2024-07-24 19:21:08.231028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.239646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.239701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.239725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.248414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.248485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.248510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.257236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.257277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.257302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.265935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.265977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.265999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.274691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.274730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.274752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.283367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.283408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.283442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.291992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.292031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17376 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.292054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.301579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.301622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.301645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.310591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.310631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.310655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.319149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.319188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.319221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.327917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.327961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.327985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.336727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.336771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.336795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.345341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.345384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.345407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.354178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.354222] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.354246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.671 [2024-07-24 19:21:08.363199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.671 [2024-07-24 19:21:08.363240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.671 [2024-07-24 19:21:08.363263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.371750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.371791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.371814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.380198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.380238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.380261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.388562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.388600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.388622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.397001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.397050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.397073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.405476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.405517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.405539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.414078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.414120] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.414143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.422717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.422759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.422782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.431495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.431536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.431559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.440065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.440106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.440129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.448751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.448792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.448816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.458185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.458227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.458250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.467059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.467103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.467133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.475795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 
00:29:02.931 [2024-07-24 19:21:08.475836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.475858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.483999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.484042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.484066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.492398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.492446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.492471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.500948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.500988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.501011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.509882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.509923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.509947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.518765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.931 [2024-07-24 19:21:08.518806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.931 [2024-07-24 19:21:08.518829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.931 [2024-07-24 19:21:08.527696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.932 [2024-07-24 19:21:08.527738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.932 [2024-07-24 19:21:08.527763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.932 [2024-07-24 19:21:08.536412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.932 [2024-07-24 19:21:08.536467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.932 [2024-07-24 19:21:08.536492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.932 [2024-07-24 19:21:08.545160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.932 [2024-07-24 19:21:08.545214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.932 [2024-07-24 19:21:08.545239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.932 [2024-07-24 19:21:08.554009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.932 [2024-07-24 19:21:08.554051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.932 [2024-07-24 19:21:08.554074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.932 [2024-07-24 19:21:08.562926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.932 [2024-07-24 19:21:08.562969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.932 [2024-07-24 19:21:08.562992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.932 [2024-07-24 19:21:08.571637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.932 [2024-07-24 19:21:08.571692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.932 [2024-07-24 19:21:08.571716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.932 [2024-07-24 19:21:08.580270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.932 [2024-07-24 19:21:08.580310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.932 [2024-07-24 19:21:08.580333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.932 [2024-07-24 19:21:08.589040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.932 [2024-07-24 19:21:08.589081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.932 [2024-07-24 19:21:08.589103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.932 [2024-07-24 19:21:08.597719] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.932 [2024-07-24 19:21:08.597760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.932 [2024-07-24 19:21:08.597784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.932 [2024-07-24 19:21:08.606403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.932 [2024-07-24 19:21:08.606453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.932 [2024-07-24 19:21:08.606478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.932 [2024-07-24 19:21:08.614910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.932 [2024-07-24 19:21:08.614950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.932 [2024-07-24 19:21:08.614973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.932 [2024-07-24 19:21:08.623466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:02.932 [2024-07-24 19:21:08.623523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.932 [2024-07-24 19:21:08.623547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.191 [2024-07-24 19:21:08.632171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.191 [2024-07-24 19:21:08.632211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.191 [2024-07-24 19:21:08.632235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.191 [2024-07-24 19:21:08.640613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.191 [2024-07-24 19:21:08.640653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.191 [2024-07-24 19:21:08.640676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.191 [2024-07-24 19:21:08.649514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.191 [2024-07-24 19:21:08.649556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.191 [2024-07-24 19:21:08.649579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:29:03.192 [2024-07-24 19:21:08.658698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.658740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.658764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.667520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.667561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.667585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.676573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.676615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.676639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.685702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.685745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.685768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.694341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.694382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.694412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.702990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.703030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.703053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.711687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.711728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.711752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.720314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.720354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.720377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.729694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.729735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.729769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.739538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.739580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.739603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.749151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.749192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.749215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.758591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.758632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.758655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.768629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.768683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.768707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.777995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.778044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.778069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.787819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.787862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.787886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.797872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.797914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.797939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.807735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.807778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.807801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.817194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.817236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.817259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.826507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.826548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.826573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.835602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.835643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.835667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.192 [2024-07-24 19:21:08.844207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30) 00:29:03.192 [2024-07-24 19:21:08.844248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.192 [2024-07-24 19:21:08.844271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:03.192 [2024-07-24 19:21:08.852727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30)
00:29:03.192 [2024-07-24 19:21:08.852768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.192 [2024-07-24 19:21:08.852792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same data-digest-error/READ/completion triplet recurs, roughly every 10 ms, from 19:21:08.861195 through 19:21:09.100215 on the same tqpair (0x21cae30), cycling qid:1 cids 0/3/6-10 (len:32, lba varies); every completion is COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0 ...]
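Each triplet above records one injected failure: the initiator's receive-side CRC32C check on the data digest fails, and the command completes with status (00/22), that is status code type 0x0 (generic command status) and status code 0x22, which SPDK prints as COMMAND TRANSIENT TRANSPORT ERROR. dnr:0 (do not retry = 0) keeps the command retryable, which is why the run still finishes with a clean latency table below. A minimal sketch for tallying such completions from a saved copy of this console output; the file name bperf-console.log is an assumption, and the test itself reads the count via bdev_get_iostat instead, as traced further down:

    # count retryable digest-error completions in a captured copy of this log
    # (bperf-console.log is hypothetical; host/digest.sh actually obtains the
    #  count through the bdev_get_iostat RPC shown in the trace below)
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf-console.log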
00:29:03.453 [2024-07-24 19:21:09.109635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30)
00:29:03.453 [2024-07-24 19:21:09.109678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.453 [2024-07-24 19:21:09.109702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:03.453 [2024-07-24 19:21:09.119015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30)
00:29:03.453 [2024-07-24 19:21:09.119056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.453 [2024-07-24 19:21:09.119080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:03.453 [2024-07-24 19:21:09.128312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21cae30)
00:29:03.453 [2024-07-24 19:21:09.128353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:03.453 [2024-07-24 19:21:09.128377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:03.453
00:29:03.453 Latency(us)
00:29:03.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:03.453 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:03.453 nvme0n1 : 2.00 3349.15 418.64 0.00 0.00 4770.34 1152.95 12427.57
00:29:03.453 ===================================================================================================================
00:29:03.453 Total : 3349.15 418.64 0.00 0.00 4770.34 1152.95 12427.57
00:29:03.453 0
00:29:03.711 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:03.711 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:03.711 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:03.711 | .driver_specific
00:29:03.711 | .nvme_error
00:29:03.711 | .status_code
00:29:03.711 | .command_transient_transport_error'
00:29:03.711 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:03.970 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 ))
00:29:03.970 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1768920
00:29:03.970 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1768920 ']'
00:29:03.970 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1768920
00:29:03.970 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:03.970 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:03.970 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1768920
00:29:03.970 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:03.970 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:03.970 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1768920'
00:29:03.970 killing process with pid 1768920
00:29:03.970 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1768920
00:29:03.970 Received shutdown signal, test time was about 2.000000 seconds
00:29:03.970
00:29:03.970 Latency(us)
00:29:03.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:03.970 ===================================================================================================================
00:29:03.970 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:03.970 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1768920
00:29:04.229 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:04.229 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:04.229 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:04.229 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:04.229 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:04.229 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1769949
00:29:04.229 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1769949 /var/tmp/bperf.sock
00:29:04.229 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1769949 ']'
00:29:04.229 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:04.229 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:04.229 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:04.229 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:04.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:04.229 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:04.229 19:21:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:04.229 [2024-07-24 19:21:09.888976] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization...
00:29:04.229 [2024-07-24 19:21:09.889149] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1769949 ]
00:29:04.488 EAL: No free 2048 kB hugepages reported on node 1
00:29:04.488 [2024-07-24 19:21:09.995081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:04.488 [2024-07-24 19:21:10.138002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:04.746 19:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:04.746 19:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:29:04.746 19:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:04.746 19:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:05.004 19:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:05.004 19:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:05.004 19:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:05.004 19:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:05.004 19:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:05.005 19:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:05.571 nvme0n1
00:29:05.571 19:21:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:05.571 19:21:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:05.571 19:21:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:05.571 19:21:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:05.571 19:21:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:05.571 19:21:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:05.829 Running I/O for 2 seconds...
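The trace above is the whole per-pass choreography of run_bperf_err. Condensed into a standalone sketch below; every command, path, and argument is copied from the trace, while the backgrounding, the bperfpid handling, and the sleep standing in for the harness's waitforlisten helper are assumptions:

    #!/usr/bin/env bash
    # Sketch of the randwrite digest-error pass, following host/digest.sh as traced.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"

    # bdevperf in wait-for-RPC mode (-z): randwrite, 4 KiB I/O, queue depth 128, 2 s
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    sleep 1   # crude stand-in for waitforlisten on /var/tmp/bperf.sock

    # keep per-status NVMe error counters; retry transient errors indefinitely
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # start from a clean slate, then attach with data digest (--ddgst) enabled
    $RPC accel_error_inject_error -o crc32c -t disable
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt accel crc32c results (injection arguments copied from the trace)
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256

    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

    # pass condition used by digest.sh: transient-transport completions > 0
    errs=$($RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
        | .driver_specific | .nvme_error | .status_code
        | .command_transient_transport_error')
    (( errs > 0 )) && echo "saw $errs transient transport errors"
    kill "$bperfpid"

The corrupted digests are what produce the Data digest error entries that follow: the data CRC32C computed for each affected WRITE no longer matches, the initiator logs the failing command and its (00/22) completion, and the retry path resubmits it.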
00:29:05.829 [2024-07-24 19:21:11.317219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190ed920
00:29:05.829 [2024-07-24 19:21:11.318631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:05.829 [2024-07-24 19:21:11.318680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
[... the same Data-digest-error/WRITE/completion triplet recurs, roughly every 15 ms, from 19:21:11.333378 through 19:21:12.726211 on the same tqpair (0x234f740); each entry names a different pdu (0x2000190xxxxx) and cid, every WRITE is len:1 with a varying lba, and every completion is COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0 ...]
00:29:07.126 [2024-07-24 19:21:12.723840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190ed4e8
00:29:07.126 [2024-07-24 19:21:12.726174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:07.126 [2024-07-24 19:21:12.726211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:07.127 [2024-07-24 19:21:12.738531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740)
with pdu=0x2000190f9b30 00:29:07.127 [2024-07-24 19:21:12.740289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.127 [2024-07-24 19:21:12.740327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:07.127 [2024-07-24 19:21:12.752915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190f6890 00:29:07.127 [2024-07-24 19:21:12.755177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.127 [2024-07-24 19:21:12.755224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:07.127 [2024-07-24 19:21:12.767511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190df550 00:29:07.127 [2024-07-24 19:21:12.768547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.127 [2024-07-24 19:21:12.768591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:07.127 [2024-07-24 19:21:12.783814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190fb480 00:29:07.127 [2024-07-24 19:21:12.785094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.127 [2024-07-24 19:21:12.785132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:07.127 [2024-07-24 19:21:12.798743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190e3060 00:29:07.127 [2024-07-24 19:21:12.799995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.127 [2024-07-24 19:21:12.800031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:07.127 [2024-07-24 19:21:12.816360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190dece0 00:29:07.127 [2024-07-24 19:21:12.817898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.127 [2024-07-24 19:21:12.817933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:12.832329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190f4f40 00:29:07.386 [2024-07-24 19:21:12.833868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:12.833906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:12.848090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x234f740) with pdu=0x2000190ee190 00:29:07.386 [2024-07-24 19:21:12.849622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:12.849658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:12.863832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190f7da8 00:29:07.386 [2024-07-24 19:21:12.865375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:12.865413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:12.880096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190ea680 00:29:07.386 [2024-07-24 19:21:12.881819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:12.881856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:12.895026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190f6020 00:29:07.386 [2024-07-24 19:21:12.896641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:12.896679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:12.911571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190f6cc8 00:29:07.386 [2024-07-24 19:21:12.913411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:12.913458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:12.928102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190e4578 00:29:07.386 [2024-07-24 19:21:12.930222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:12.930271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:12.940719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190f6cc8 00:29:07.386 [2024-07-24 19:21:12.941926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:2299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:12.941965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:12.957171] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x234f740) with pdu=0x2000190e5220 00:29:07.386 [2024-07-24 19:21:12.958576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:12.958615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:12.973787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190f2948 00:29:07.386 [2024-07-24 19:21:12.975384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:12.975423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:12.991161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190e3d08 00:29:07.386 [2024-07-24 19:21:12.992530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:12.992568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:13.006119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190e4578 00:29:07.386 [2024-07-24 19:21:13.008595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:13.008636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:13.023473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190e9e10 00:29:07.386 [2024-07-24 19:21:13.025524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:13.025561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:13.036786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190f20d8 00:29:07.386 [2024-07-24 19:21:13.037773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:13.037810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:13.053282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190f0ff8 00:29:07.386 [2024-07-24 19:21:13.054461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:13.054499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:07.386 [2024-07-24 19:21:13.069743] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190e7818 00:29:07.386 [2024-07-24 19:21:13.071117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.386 [2024-07-24 19:21:13.071155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:07.645 [2024-07-24 19:21:13.084830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190de8a8 00:29:07.645 [2024-07-24 19:21:13.087259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.645 [2024-07-24 19:21:13.087298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:07.645 [2024-07-24 19:21:13.098452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190e5658 00:29:07.645 [2024-07-24 19:21:13.099606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.645 [2024-07-24 19:21:13.099643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:07.645 [2024-07-24 19:21:13.116045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190f6458 00:29:07.645 [2024-07-24 19:21:13.117514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.645 [2024-07-24 19:21:13.117552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:07.645 [2024-07-24 19:21:13.132248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190f8a50 00:29:07.645 [2024-07-24 19:21:13.133822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.645 [2024-07-24 19:21:13.133860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:07.645 [2024-07-24 19:21:13.148619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190ebfd0 00:29:07.645 [2024-07-24 19:21:13.150418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.645 [2024-07-24 19:21:13.150463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:07.645 [2024-07-24 19:21:13.163450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190ebfd0 00:29:07.645 [2024-07-24 19:21:13.165329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.645 [2024-07-24 19:21:13.165371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:07.645 
[2024-07-24 19:21:13.179980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190f0ff8 00:29:07.645 [2024-07-24 19:21:13.182094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.645 [2024-07-24 19:21:13.182138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:07.645 [2024-07-24 19:21:13.196088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190f4298 00:29:07.645 [2024-07-24 19:21:13.198097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.645 [2024-07-24 19:21:13.198138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:07.645 [2024-07-24 19:21:13.212050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190e0a68 00:29:07.645 [2024-07-24 19:21:13.214151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.645 [2024-07-24 19:21:13.214190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:07.645 [2024-07-24 19:21:13.227271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190fd208 00:29:07.646 [2024-07-24 19:21:13.229356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.646 [2024-07-24 19:21:13.229393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:07.646 [2024-07-24 19:21:13.242017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190e95a0 00:29:07.646 [2024-07-24 19:21:13.243457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.646 [2024-07-24 19:21:13.243495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:07.646 [2024-07-24 19:21:13.257987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190f6020 00:29:07.646 [2024-07-24 19:21:13.259222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.646 [2024-07-24 19:21:13.259260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:07.646 [2024-07-24 19:21:13.274049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190e73e0 00:29:07.646 [2024-07-24 19:21:13.275716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.646 [2024-07-24 19:21:13.275752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006f p:0 
m:0 dnr:0 00:29:07.646 [2024-07-24 19:21:13.289846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190ea680 00:29:07.646 [2024-07-24 19:21:13.291525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.646 [2024-07-24 19:21:13.291563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:07.646 [2024-07-24 19:21:13.304507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f740) with pdu=0x2000190e9168 00:29:07.646 [2024-07-24 19:21:13.306143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:07.646 [2024-07-24 19:21:13.306180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:07.646 00:29:07.646 Latency(us) 00:29:07.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.646 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:07.646 nvme0n1 : 2.01 16147.76 63.08 0.00 0.00 7916.77 3252.53 19903.53 00:29:07.646 =================================================================================================================== 00:29:07.646 Total : 16147.76 63.08 0.00 0.00 7916.77 3252.53 19903.53 00:29:07.646 0 00:29:07.646 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:07.646 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:07.646 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:07.646 | .driver_specific 00:29:07.646 | .nvme_error 00:29:07.646 | .status_code 00:29:07.646 | .command_transient_transport_error' 00:29:07.646 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:08.213 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 127 > 0 )) 00:29:08.213 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1769949 00:29:08.213 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1769949 ']' 00:29:08.213 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1769949 00:29:08.214 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:08.214 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:08.214 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1769949 00:29:08.214 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:08.214 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:08.214 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1769949' 
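The get_transient_errcount step above is just one RPC plus a jq filter over its JSON: the bdev layer accumulates per-status-code NVMe error counts (the --nvme-error-stat option seen in the next pass's setup enables this accounting) and bdev_get_iostat reports them. A minimal stand-alone sketch of the same query, assuming only the rpc.py path and bperf socket that appear in this log; the 127 tested just below is the value this pipeline returns:

```bash
#!/usr/bin/env bash
# Read how many completions ended as COMMAND TRANSIENT TRANSPORT ERROR (00/22),
# as accumulated by the bdev layer for nvme0n1.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# A digest-error pass only counts as a pass if errors were actually recorded.
(( ${errcount:-0} > 0 )) && echo "transient transport errors: $errcount"
```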
00:29:08.213 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 127 > 0 ))
00:29:08.213 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1769949
00:29:08.213 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1769949 ']'
00:29:08.213 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1769949
00:29:08.214 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:08.214 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:08.214 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1769949
00:29:08.214 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:08.214 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:08.214 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1769949'
00:29:08.214 killing process with pid 1769949
00:29:08.214 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1769949
00:29:08.214 Received shutdown signal, test time was about 2.000000 seconds
00:29:08.214
00:29:08.214 Latency(us)
00:29:08.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:08.214 ===================================================================================================================
00:29:08.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:08.214 19:21:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1769949
00:29:08.472 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:08.472 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:08.472 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:08.472 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:08.472 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:08.472 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1770375
00:29:08.472 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:08.472 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1770375 /var/tmp/bperf.sock
00:29:08.472 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1770375 ']'
00:29:08.472 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:08.472 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:08.472 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:08.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:08.472 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:08.472 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:08.473 [2024-07-24 19:21:14.106448] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization...
00:29:08.473 [2024-07-24 19:21:14.106556] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1770375 ]
00:29:08.472 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:08.472 Zero copy mechanism will not be used.
00:29:08.473 EAL: No free 2048 kB hugepages reported on node 1
00:29:08.473 [2024-07-24 19:21:14.185863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:08.989 [2024-07-24 19:21:14.325558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
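The run_bperf_err invocation above is the generic launch pattern for these bperf passes: start bdevperf idle with -z, then poll its RPC socket until it is up. A stand-alone sketch using the binary path, socket, and flags recorded in this log; the polling loop is a simplified stand-in for autotest's waitforlisten, and rpc_get_methods is used here only as a cheap liveness probe:

```bash
#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# -z parks bdevperf after init; I/O starts only when perform_tests arrives
# over the RPC socket. -m 2 pins it to core 1, matching the
# "Reactor started on core 1" notice above.
"$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Simplified waitforlisten: retry up to max_retries=100 until the RPC
# socket answers, as the log's wait loop does.
for _ in $(seq 1 100); do
    "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done
```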
00:29:08.989 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:08.989 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:29:08.989 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:08.989 19:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:09.555 19:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:09.555 19:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:09.555 19:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:09.555 19:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:09.555 19:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:09.555 19:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:10.121 nvme0n1
00:29:10.121 19:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:10.121 19:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:10.121 19:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:10.121 19:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:10.121 19:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:10.121 19:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:10.380 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:10.380 Zero copy mechanism will not be used.
00:29:10.380 Running I/O for 2 seconds...
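The RPCs just above are the whole error path for this pass: per-status-code error accounting switched on, any stale CRC32C fault cleared, the controller attached with --ddgst so every TCP data PDU carries a CRC32C data digest, then the fault armed and the queued job kicked off. A condensed sketch of that sequence, with every flag taken from the log itself; one assumption is flagged inline, since rpc_cmd's underlying socket is hidden behind xtrace_disable:

```bash
#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf()  { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # bdevperf app
target() { "$spdk/scripts/rpc.py" "$@"; }  # assumption: rpc_cmd uses the default RPC socket

# Count NVMe errors per status code; --bdev-retry-count -1 as the test sets it.
bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Make sure no CRC32C fault is armed while the controller attaches ...
target accel_error_inject_error -o crc32c -t disable

# ... then attach with data digest enabled on the TCP transport.
bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0      # prints: nvme0n1

# Arm the fault (-o crc32c -t corrupt -i 32, exactly as the log records it)
# and drive the bdevperf job that was started with -z.
target accel_error_inject_error -o crc32c -t corrupt -i 32
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
```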
00:29:10.380 [2024-07-24 19:21:15.870873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90
00:29:10.380 [2024-07-24 19:21:15.871342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.380 [2024-07-24 19:21:15.871392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern, a data_crc32_calc_done Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90, a 32-block WRITE (sqid:1 cid:15, lba varies), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, repeats for roughly fifty further writes between 19:21:15.880 and 19:21:16.346 ...]
00:29:10.901 [2024-07-24 19:21:16.354364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90
00:29:10.901 [2024-07-24 19:21:16.354837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.901 [2024-07-24 19:21:16.354876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:10.901 [2024-07-24 19:21:16.362356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90
00:29:10.901
[2024-07-24 19:21:16.362770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.901 [2024-07-24 19:21:16.362817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.901 [2024-07-24 19:21:16.370658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.901 [2024-07-24 19:21:16.371121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.901 [2024-07-24 19:21:16.371159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.901 [2024-07-24 19:21:16.379166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.901 [2024-07-24 19:21:16.379666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.901 [2024-07-24 19:21:16.379706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.901 [2024-07-24 19:21:16.387548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.901 [2024-07-24 19:21:16.387963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.388003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.396273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.396740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.396781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.404149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.404572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.404619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.412893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.413325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.413364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.421963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.422375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.422414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.429522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.429932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.429971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.437075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.437492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.437531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.445641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.445905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.445943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.453758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.454143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.454182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.461936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.462350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.462390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.470835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.471271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.471310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.478893] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.479301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.479341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.487089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.487469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.487508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.495803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.496279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.496317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.503627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.503996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.504035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.511008] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.511449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.511488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.518640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.519039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.519081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.526300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.526689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.526730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:10.902 [2024-07-24 19:21:16.533994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.534365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.534408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.542345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.542862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.542913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.551233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.551705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.551746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.560072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.560449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.560502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.570093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.570488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.570527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.577867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.578241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.578280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.585551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.585925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.585964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:10.902 [2024-07-24 19:21:16.593581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:10.902 [2024-07-24 19:21:16.593980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.902 [2024-07-24 19:21:16.594019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.162 [2024-07-24 19:21:16.601389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.162 [2024-07-24 19:21:16.601784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.162 [2024-07-24 19:21:16.601823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.162 [2024-07-24 19:21:16.609364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.162 [2024-07-24 19:21:16.609807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.162 [2024-07-24 19:21:16.609849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.162 [2024-07-24 19:21:16.618541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.162 [2024-07-24 19:21:16.618918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.162 [2024-07-24 19:21:16.618958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.162 [2024-07-24 19:21:16.627176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.162 [2024-07-24 19:21:16.627575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.162 [2024-07-24 19:21:16.627614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.162 [2024-07-24 19:21:16.635820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.162 [2024-07-24 19:21:16.636194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.162 [2024-07-24 19:21:16.636232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.162 [2024-07-24 19:21:16.645149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.162 [2024-07-24 19:21:16.645541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.162 [2024-07-24 19:21:16.645581] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.162 [2024-07-24 19:21:16.653195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.162 [2024-07-24 19:21:16.653573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.162 [2024-07-24 19:21:16.653612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.162 [2024-07-24 19:21:16.660700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.162 [2024-07-24 19:21:16.661068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.162 [2024-07-24 19:21:16.661108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.162 [2024-07-24 19:21:16.668304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.162 [2024-07-24 19:21:16.668679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.162 [2024-07-24 19:21:16.668718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.162 [2024-07-24 19:21:16.676407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.162 [2024-07-24 19:21:16.676785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.162 [2024-07-24 19:21:16.676824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.162 [2024-07-24 19:21:16.685785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.162 [2024-07-24 19:21:16.686209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.162 [2024-07-24 19:21:16.686248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.162 [2024-07-24 19:21:16.694951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.162 [2024-07-24 19:21:16.695336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.162 [2024-07-24 19:21:16.695375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.162 [2024-07-24 19:21:16.703985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.162 [2024-07-24 19:21:16.704355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.162 [2024-07-24 19:21:16.704394] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.162 [2024-07-24 19:21:16.712572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.162 [2024-07-24 19:21:16.712942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.162 [2024-07-24 19:21:16.712980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.721794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.722164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.163 [2024-07-24 19:21:16.722203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.730758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.731129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.163 [2024-07-24 19:21:16.731168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.739420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.739799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.163 [2024-07-24 19:21:16.739838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.748274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.748653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.163 [2024-07-24 19:21:16.748692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.757086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.757466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.163 [2024-07-24 19:21:16.757504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.765623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.765999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:11.163 [2024-07-24 19:21:16.766045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.774998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.775365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.163 [2024-07-24 19:21:16.775405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.784056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.784425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.163 [2024-07-24 19:21:16.784474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.792344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.792723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.163 [2024-07-24 19:21:16.792762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.799958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.800329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.163 [2024-07-24 19:21:16.800368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.807467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.807836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.163 [2024-07-24 19:21:16.807875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.815067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.815444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.163 [2024-07-24 19:21:16.815483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.822872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.823241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.163 [2024-07-24 19:21:16.823279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.831551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.831924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.163 [2024-07-24 19:21:16.831962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.840731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.841109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.163 [2024-07-24 19:21:16.841148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.163 [2024-07-24 19:21:16.848779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.163 [2024-07-24 19:21:16.849160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.163 [2024-07-24 19:21:16.849198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.422 [2024-07-24 19:21:16.857675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.422 [2024-07-24 19:21:16.858048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.858086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.866404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.866819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.866858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.874289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.874671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.874709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.882958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.883329] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.883368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.891981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.892348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.892387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.900761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.901134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.901174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.909128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.909512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.909551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.917759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.918129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.918167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.926661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.927028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.927066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.934336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.934742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.934781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.941895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.942264] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.942302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.949439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.949813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.949852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.957015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.957382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.957420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.965705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.966076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.966115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.973660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.974031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.974068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.982574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.982943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.982988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:16.991444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:16.991819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:16.991858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:17.000215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 
00:29:11.423 [2024-07-24 19:21:17.000629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:17.000668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:17.007793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:17.008164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:17.008202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:17.015426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:17.015812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:17.015849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:17.023015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:17.023383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:17.023421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:17.030584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:17.030953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:17.030993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:17.039251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:17.039633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:17.039672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:17.048098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:17.048492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:17.048531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:17.056972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:17.057340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:17.057378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:17.065880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:17.066246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:17.066283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.423 [2024-07-24 19:21:17.074246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.423 [2024-07-24 19:21:17.074622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.423 [2024-07-24 19:21:17.074660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.424 [2024-07-24 19:21:17.082020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.424 [2024-07-24 19:21:17.082445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.424 [2024-07-24 19:21:17.082485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.424 [2024-07-24 19:21:17.089532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.424 [2024-07-24 19:21:17.089902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.424 [2024-07-24 19:21:17.089941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.424 [2024-07-24 19:21:17.097076] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.424 [2024-07-24 19:21:17.097452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.424 [2024-07-24 19:21:17.097491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.424 [2024-07-24 19:21:17.104597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.424 [2024-07-24 19:21:17.105021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.424 [2024-07-24 19:21:17.105059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.424 [2024-07-24 19:21:17.112689] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.424 [2024-07-24 19:21:17.113058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.424 [2024-07-24 19:21:17.113097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.683 [2024-07-24 19:21:17.121370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.683 [2024-07-24 19:21:17.121760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.683 [2024-07-24 19:21:17.121806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.683 [2024-07-24 19:21:17.130102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.683 [2024-07-24 19:21:17.130482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.683 [2024-07-24 19:21:17.130522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.683 [2024-07-24 19:21:17.137547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.683 [2024-07-24 19:21:17.137922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.137961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.145073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.145461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.145500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.152628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.153000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.153040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.160317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.160699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.160738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:11.684 [2024-07-24 19:21:17.168939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.169324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.169363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.178192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.178571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.178610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.186086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.186463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.186501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.194549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.194971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.195010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.203313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.203687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.203726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.212232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.212705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.212744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.222417] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.222800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.222838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.231564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.231937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.231975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.240033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.240403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.240449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.249038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.249542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.249581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.258576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.258948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.258986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.266508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.266876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.266915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.274336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.274711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.274750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.282236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.282620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.282659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.289955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.290336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.290376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.297542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.297912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.297951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.305157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.305538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.305576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.312923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.313295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.313332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.320515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.320884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.320923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.328073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.328452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.328490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.335741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.336108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.684 [2024-07-24 19:21:17.336153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.684 [2024-07-24 19:21:17.343214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.684 [2024-07-24 19:21:17.343632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.685 [2024-07-24 19:21:17.343670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.685 [2024-07-24 19:21:17.350815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.685 [2024-07-24 19:21:17.351189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.685 [2024-07-24 19:21:17.351227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.685 [2024-07-24 19:21:17.358483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.685 [2024-07-24 19:21:17.358855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.685 [2024-07-24 19:21:17.358893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.685 [2024-07-24 19:21:17.366082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.685 [2024-07-24 19:21:17.366464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.685 [2024-07-24 19:21:17.366503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.685 [2024-07-24 19:21:17.373887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.685 [2024-07-24 19:21:17.374299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.685 [2024-07-24 19:21:17.374352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.381674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.382080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.382118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.389370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.389755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 
[2024-07-24 19:21:17.389794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.397077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.397457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.397495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.404634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.405014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.405053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.412217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.412636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.412675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.419835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.420201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.420239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.427323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.427698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.427737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.435294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.435672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.435711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.442943] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.443314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.443352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.450593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.450964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.451003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.458222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.458600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.458639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.465707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.466138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.466176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.473386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.473775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.473815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.481407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.481853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.481891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.489117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.489499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.489538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.496774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.497202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.497240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.504347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.504791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.504830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.944 [2024-07-24 19:21:17.511939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.944 [2024-07-24 19:21:17.512366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.944 [2024-07-24 19:21:17.512405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-07-24 19:21:17.519451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.945 [2024-07-24 19:21:17.519823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-07-24 19:21:17.519861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.945 [2024-07-24 19:21:17.527867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.945 [2024-07-24 19:21:17.528235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-07-24 19:21:17.528274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.945 [2024-07-24 19:21:17.536846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.945 [2024-07-24 19:21:17.537245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-07-24 19:21:17.537296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.945 [2024-07-24 19:21:17.546009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.945 [2024-07-24 19:21:17.546470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-07-24 19:21:17.546509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-07-24 19:21:17.555048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.945 [2024-07-24 19:21:17.555421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-07-24 19:21:17.555480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.945 [2024-07-24 19:21:17.564200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.945 [2024-07-24 19:21:17.564618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-07-24 19:21:17.564657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.945 [2024-07-24 19:21:17.573652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.945 [2024-07-24 19:21:17.574036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-07-24 19:21:17.574077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.945 [2024-07-24 19:21:17.583865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.945 [2024-07-24 19:21:17.584311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-07-24 19:21:17.584350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-07-24 19:21:17.594057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.945 [2024-07-24 19:21:17.594453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-07-24 19:21:17.594492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:11.945 [2024-07-24 19:21:17.604141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.945 [2024-07-24 19:21:17.604555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-07-24 19:21:17.604594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:11.945 [2024-07-24 19:21:17.614042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.945 [2024-07-24 19:21:17.614509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-07-24 19:21:17.614548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:11.945 [2024-07-24 19:21:17.624547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.945 
[2024-07-24 19:21:17.624919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-07-24 19:21:17.624958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:11.945 [2024-07-24 19:21:17.633894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:11.945 [2024-07-24 19:21:17.634290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.945 [2024-07-24 19:21:17.634329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.205 [2024-07-24 19:21:17.644504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.205 [2024-07-24 19:21:17.644952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.205 [2024-07-24 19:21:17.644992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.205 [2024-07-24 19:21:17.654967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.205 [2024-07-24 19:21:17.655373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.205 [2024-07-24 19:21:17.655412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.205 [2024-07-24 19:21:17.665199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.205 [2024-07-24 19:21:17.665663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.205 [2024-07-24 19:21:17.665719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.205 [2024-07-24 19:21:17.674265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.205 [2024-07-24 19:21:17.674656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.205 [2024-07-24 19:21:17.674703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.205 [2024-07-24 19:21:17.682608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.205 [2024-07-24 19:21:17.683004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.205 [2024-07-24 19:21:17.683044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.205 [2024-07-24 19:21:17.691641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) 
with pdu=0x2000190fef90 00:29:12.205 [2024-07-24 19:21:17.692026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.205 [2024-07-24 19:21:17.692066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.205 [2024-07-24 19:21:17.699502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.205 [2024-07-24 19:21:17.699884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.205 [2024-07-24 19:21:17.699924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.205 [2024-07-24 19:21:17.707530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.205 [2024-07-24 19:21:17.707937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.205 [2024-07-24 19:21:17.707977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.205 [2024-07-24 19:21:17.715580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.205 [2024-07-24 19:21:17.715964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.205 [2024-07-24 19:21:17.716002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.205 [2024-07-24 19:21:17.724315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.205 [2024-07-24 19:21:17.724694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.205 [2024-07-24 19:21:17.724734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.205 [2024-07-24 19:21:17.733080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.205 [2024-07-24 19:21:17.733460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.205 [2024-07-24 19:21:17.733499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.205 [2024-07-24 19:21:17.741506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.205 [2024-07-24 19:21:17.741880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.205 [2024-07-24 19:21:17.741919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.205 [2024-07-24 19:21:17.749146] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.205 [2024-07-24 19:21:17.749526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.205 [2024-07-24 19:21:17.749564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.205 [2024-07-24 19:21:17.756836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.205 [2024-07-24 19:21:17.757205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.205 [2024-07-24 19:21:17.757243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.205 [2024-07-24 19:21:17.764472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.205 [2024-07-24 19:21:17.764843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.205 [2024-07-24 19:21:17.764882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.206 [2024-07-24 19:21:17.772137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.206 [2024-07-24 19:21:17.772514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-07-24 19:21:17.772560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.206 [2024-07-24 19:21:17.780207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.206 [2024-07-24 19:21:17.780588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-07-24 19:21:17.780627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.206 [2024-07-24 19:21:17.789114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.206 [2024-07-24 19:21:17.789493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-07-24 19:21:17.789533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.206 [2024-07-24 19:21:17.796867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.206 [2024-07-24 19:21:17.797239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-07-24 19:21:17.797277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.206 [2024-07-24 19:21:17.805896] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.206 [2024-07-24 19:21:17.806325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-07-24 19:21:17.806363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.206 [2024-07-24 19:21:17.815791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.206 [2024-07-24 19:21:17.816298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-07-24 19:21:17.816336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.206 [2024-07-24 19:21:17.824789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.206 [2024-07-24 19:21:17.825160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-07-24 19:21:17.825198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.206 [2024-07-24 19:21:17.833242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.206 [2024-07-24 19:21:17.833624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-07-24 19:21:17.833662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.206 [2024-07-24 19:21:17.842595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.206 [2024-07-24 19:21:17.842966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-07-24 19:21:17.843004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.206 [2024-07-24 19:21:17.851353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.206 [2024-07-24 19:21:17.851729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-07-24 19:21:17.851767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.206 [2024-07-24 19:21:17.860332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x234f8e0) with pdu=0x2000190fef90 00:29:12.206 [2024-07-24 19:21:17.860713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.206 [2024-07-24 19:21:17.860751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
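What precedes this point is the expected failure storm from the digest_error test: the TCP transport rejects each data PDU whose CRC32C data digest fails verification (the tcp.c:2113 data_crc32_calc_done entries), and every affected WRITE is completed back to the bdevperf job as COMMAND TRANSIENT TRANSPORT ERROR (00/22), that is, generic status code 0x22 from the NVMe base specification. A quick way to tally both sides of that exchange from a saved console log: a minimal sketch, assuming this output was captured to a hypothetical file named bperf.log:

    # count digest failures reported by the TCP transport
    grep -c 'Data digest error' bperf.log
    # count the transient-transport-error completions they produced
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' bperf.log

The two counts track each other one to one here, which is what the test relies on when it asserts a non-zero transient error count in the teardown that follows.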
00:29:12.206
00:29:12.206 Latency(us)
00:29:12.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:12.206 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:12.206 nvme0n1 : 2.00 3650.13 456.27 0.00 0.00 4372.20 3519.53 10971.21
00:29:12.206 ===================================================================================================================
00:29:12.206 Total : 3650.13 456.27 0.00 0.00 4372.20 3519.53 10971.21
00:29:12.206 0
00:29:12.206 19:21:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:12.206 19:21:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:12.206 19:21:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:12.206 19:21:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:12.206 | .driver_specific
00:29:12.206 | .nvme_error
00:29:12.206 | .status_code
00:29:12.206 | .command_transient_transport_error'
00:29:12.806 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 235 > 0 ))
00:29:12.806 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1770375
00:29:12.806 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1770375 ']'
00:29:12.806 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1770375
00:29:12.806 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:12.806 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:12.806 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1770375
00:29:12.806 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:12.806 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:12.806 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1770375'
00:29:12.806 killing process with pid 1770375
00:29:12.806 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1770375
00:29:12.806 Received shutdown signal, test time was about 2.000000 seconds
00:29:12.806
00:29:12.806 Latency(us)
00:29:12.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:12.806 ===================================================================================================================
00:29:12.806 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:12.806 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1770375
00:29:13.065 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1768118
00:29:13.065 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1768118 ']'
00:29:13.065 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1768118
00:29:13.065 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:29:13.065 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:13.065 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1768118
00:29:13.065 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:29:13.065 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:29:13.065 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1768118'
00:29:13.065 killing process with pid 1768118
00:29:13.065 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1768118
00:29:13.065 19:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1768118
00:29:13.633
00:29:13.633 real 0m19.194s
00:29:13.633 user 0m39.205s
00:29:13.633 sys 0m5.424s
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:13.633 ************************************
00:29:13.633 END TEST nvmf_digest_error
00:29:13.633 ************************************
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:13.633 rmmod nvme_tcp
00:29:13.633 rmmod nvme_fabrics
00:29:13.633 rmmod nvme_keyring
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1768118 ']'
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1768118
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1768118 ']'
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1768118
00:29:13.633 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1768118) - No such process
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1768118 is not found'
00:29:13.633 Process with pid 1768118 is not found
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:13.633 19:21:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:15.535 19:21:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:15.535
00:29:15.535 real 0m43.789s
00:29:15.535 user 1m20.284s
00:29:15.535 sys 0m12.692s
00:29:15.535 19:21:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:15.535 19:21:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:15.535 ************************************
00:29:15.535 END TEST nvmf_digest
00:29:15.535 ************************************
00:29:15.535 19:21:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:29:15.535 19:21:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:29:15.535 19:21:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:29:15.535 19:21:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:15.535 19:21:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:29:15.535 19:21:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:15.535 19:21:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:15.794 ************************************
00:29:15.794 START TEST nvmf_bdevperf
00:29:15.794 ************************************
00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:29:15.794 * Looking for test storage...
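The get_transient_errcount step in the digest_error teardown above reduces to one RPC plus a jq filter over its JSON reply. The same query can be run standalone; a minimal sketch, assuming an SPDK application is still serving RPCs on /var/tmp/bperf.sock and exposing a bdev named nvme0n1:

    # fetch per-bdev I/O statistics and extract the NVMe transient-transport-error counter
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The (( 235 > 0 )) check in the trace is host/digest.sh asserting that this counter came back non-zero, i.e. that the injected digest errors really were observed as transient transport errors.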
00:29:15.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:15.794 19:21:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:19.083 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:19.083 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:19.083 19:21:24 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:19.083 Found net devices under 0000:84:00.0: cvl_0_0 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.083 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:19.084 Found net devices under 0000:84:00.1: cvl_0_1 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:19.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:29:19.084 00:29:19.084 --- 10.0.0.2 ping statistics --- 00:29:19.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.084 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:19.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:29:19.084 00:29:19.084 --- 10.0.0.1 ping statistics --- 00:29:19.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.084 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1772990 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1772990 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1772990 ']' 
00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:19.084 [2024-07-24 19:21:24.330913] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:29:19.084 [2024-07-24 19:21:24.331016] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.084 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.084 [2024-07-24 19:21:24.416251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:19.084 [2024-07-24 19:21:24.560158] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.084 [2024-07-24 19:21:24.560229] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.084 [2024-07-24 19:21:24.560250] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.084 [2024-07-24 19:21:24.560267] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.084 [2024-07-24 19:21:24.560282] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:19.084 [2024-07-24 19:21:24.560802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:19.084 [2024-07-24 19:21:24.560887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:19.084 [2024-07-24 19:21:24.560893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.084 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:19.084 [2024-07-24 19:21:24.760390] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:19.344 Malloc0 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:19.344 [2024-07-24 19:21:24.835077] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:19.344 { 00:29:19.344 "params": { 00:29:19.344 "name": "Nvme$subsystem", 00:29:19.344 "trtype": "$TEST_TRANSPORT", 00:29:19.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:19.344 "adrfam": "ipv4", 00:29:19.344 "trsvcid": "$NVMF_PORT", 00:29:19.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:19.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:19.344 "hdgst": ${hdgst:-false}, 00:29:19.344 "ddgst": ${ddgst:-false} 00:29:19.344 }, 00:29:19.344 "method": "bdev_nvme_attach_controller" 00:29:19.344 } 00:29:19.344 EOF 00:29:19.344 )") 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:19.344 19:21:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:19.344 "params": { 00:29:19.344 "name": "Nvme1", 00:29:19.344 "trtype": "tcp", 00:29:19.344 "traddr": "10.0.0.2", 00:29:19.344 "adrfam": "ipv4", 00:29:19.344 "trsvcid": "4420", 00:29:19.344 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:19.344 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:19.344 "hdgst": false, 00:29:19.344 "ddgst": false 00:29:19.344 }, 00:29:19.344 "method": "bdev_nvme_attach_controller" 00:29:19.344 }' 00:29:19.344 [2024-07-24 19:21:24.896526] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:29:19.344 [2024-07-24 19:21:24.896623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773134 ] 00:29:19.344 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.344 [2024-07-24 19:21:25.008863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.603 [2024-07-24 19:21:25.153901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.861 Running I/O for 1 seconds... 
00:29:21.240 00:29:21.240 Latency(us) 00:29:21.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.240 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:21.240 Verification LBA range: start 0x0 length 0x4000 00:29:21.240 Nvme1n1 : 1.01 6363.54 24.86 0.00 0.00 20016.30 3665.16 15631.55 00:29:21.240 =================================================================================================================== 00:29:21.240 Total : 6363.54 24.86 0.00 0.00 20016.30 3665.16 15631.55 00:29:21.240 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1773277 00:29:21.240 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:21.240 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:21.240 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:21.240 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:21.240 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:21.240 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:21.240 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:21.240 { 00:29:21.240 "params": { 00:29:21.240 "name": "Nvme$subsystem", 00:29:21.240 "trtype": "$TEST_TRANSPORT", 00:29:21.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.240 "adrfam": "ipv4", 00:29:21.240 "trsvcid": "$NVMF_PORT", 00:29:21.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.240 "hdgst": ${hdgst:-false}, 00:29:21.240 "ddgst": ${ddgst:-false} 00:29:21.240 }, 00:29:21.240 "method": "bdev_nvme_attach_controller" 00:29:21.240 } 00:29:21.240 EOF 00:29:21.240 )") 00:29:21.240 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:21.240 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:21.240 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:21.240 19:21:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:21.240 "params": { 00:29:21.240 "name": "Nvme1", 00:29:21.240 "trtype": "tcp", 00:29:21.240 "traddr": "10.0.0.2", 00:29:21.240 "adrfam": "ipv4", 00:29:21.240 "trsvcid": "4420", 00:29:21.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:21.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:21.240 "hdgst": false, 00:29:21.240 "ddgst": false 00:29:21.240 }, 00:29:21.240 "method": "bdev_nvme_attach_controller" 00:29:21.240 }' 00:29:21.240 [2024-07-24 19:21:26.924438] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:29:21.240 [2024-07-24 19:21:26.924545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773277 ] 00:29:21.499 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.499 [2024-07-24 19:21:27.005919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.499 [2024-07-24 19:21:27.143167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.757 Running I/O for 15 seconds... 
00:29:24.291 19:21:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1772990 00:29:24.291 19:21:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:24.291 [2024-07-24 19:21:29.881358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 [2024-07-24 19:21:29.881418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.881480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 [2024-07-24 19:21:29.881504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.881529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 [2024-07-24 19:21:29.881549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.881571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 [2024-07-24 19:21:29.881592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.881615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 [2024-07-24 19:21:29.881636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.881657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 [2024-07-24 19:21:29.881698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.881721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 [2024-07-24 19:21:29.881742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.881764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 [2024-07-24 19:21:29.881785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.881808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 [2024-07-24 19:21:29.881828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.881851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 
[2024-07-24 19:21:29.881871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.881892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 [2024-07-24 19:21:29.881911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.881931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 [2024-07-24 19:21:29.881951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.881972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 [2024-07-24 19:21:29.881991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.882012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 [2024-07-24 19:21:29.882030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.882051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 [2024-07-24 19:21:29.882070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.882091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.291 [2024-07-24 19:21:29.882111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.291 [2024-07-24 19:21:29.882152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.292 [2024-07-24 19:21:29.882188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.882228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.292 [2024-07-24 19:21:29.882262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.882308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.292 [2024-07-24 19:21:29.882344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.882382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.292 [2024-07-24 19:21:29.882419] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.882478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.292 [2024-07-24 19:21:29.882519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.882540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.292 [2024-07-24 19:21:29.882560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.882580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.292 [2024-07-24 19:21:29.882599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.882619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.292 [2024-07-24 19:21:29.882639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.882659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.292 [2024-07-24 19:21:29.882682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.882702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.292 [2024-07-24 19:21:29.882721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.882768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.292 [2024-07-24 19:21:29.882803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.882841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.292 [2024-07-24 19:21:29.882875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.882913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.882947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.882984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.883018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.883055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.883099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.883138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.883172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.883210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.883245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.883282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.883316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.883353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.883387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.883425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.883493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.883516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.883535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.883555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.883574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.883595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.883615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.883635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.883654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.883674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.883716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.883755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.883789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.883826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.883861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.883908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.883944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.883982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.884016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.884054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.884088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.884125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.884159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.884196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.884230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.884267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.884301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.884339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.884373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.884410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.884459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.884507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.884528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.884549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.884568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.884588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.884608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.884628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.884647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.292 [2024-07-24 19:21:29.884667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.292 [2024-07-24 19:21:29.884709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.884738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.884783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.884822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.884856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.884894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.884927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.884964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.884998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 
[2024-07-24 19:21:29.885035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.885069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.885107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.885140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.885177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.885211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.885249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.885284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.885321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.885355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.885393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.885437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.885480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.885518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.885540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.885558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.885578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.885602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.885624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.885643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.885664] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.885687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.885731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.885774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.885813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.885847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.885885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.885921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.885959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.885994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.886032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.886066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.886103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.886137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.886174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.886209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.886246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.886280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.886319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.886352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.886389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.886424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.886507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.886529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.886549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.886568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.886588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.293 [2024-07-24 19:21:29.886606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.886627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.886645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.886665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.886684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.886723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.886748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.886797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.886832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.886869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.886903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.886940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.886973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.887009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:66 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.887043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.887080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.887113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.887150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.887184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.887220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.887263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.887302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.293 [2024-07-24 19:21:29.887336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.293 [2024-07-24 19:21:29.887375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.294 [2024-07-24 19:21:29.887409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.887461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.294 [2024-07-24 19:21:29.887503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.887525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.294 [2024-07-24 19:21:29.887543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.887564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.294 [2024-07-24 19:21:29.887582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.887603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.294 [2024-07-24 19:21:29.887621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.887643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129184 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.294 [2024-07-24 19:21:29.887662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.887683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.294 [2024-07-24 19:21:29.887723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.887750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.294 [2024-07-24 19:21:29.887801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.887839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.294 [2024-07-24 19:21:29.887873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.887910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.294 [2024-07-24 19:21:29.887944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.887980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.294 [2024-07-24 19:21:29.888017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.888064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.294 [2024-07-24 19:21:29.888100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.888137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.294 [2024-07-24 19:21:29.888171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.888209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.294 [2024-07-24 19:21:29.888243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.888280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.294 [2024-07-24 19:21:29.888315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.888353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:24.294 [2024-07-24 19:21:29.888387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.888425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:24.294 [2024-07-24 19:21:29.888476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.888517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-07-24 19:21:29.888536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.888557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-07-24 19:21:29.888576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.888597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-07-24 19:21:29.888615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.888636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-07-24 19:21:29.888654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.888674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-07-24 19:21:29.888710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.888738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-07-24 19:21:29.888783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.888823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-07-24 19:21:29.888867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.888906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-07-24 19:21:29.888940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.888977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-07-24 19:21:29.889011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.889049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-07-24 19:21:29.889084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.889121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-07-24 19:21:29.889155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.889192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-07-24 19:21:29.889226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.889263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-07-24 19:21:29.889297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.889333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-07-24 19:21:29.889368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.889405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.294 [2024-07-24 19:21:29.889644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.889679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25038a0 is same with the state(5) to be set
00:29:24.294 [2024-07-24 19:21:29.889703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:29:24.294 [2024-07-24 19:21:29.889719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:29:24.294 [2024-07-24 19:21:29.889735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128608 len:8 PRP1 0x0 PRP2 0x0
00:29:24.294 [2024-07-24 19:21:29.889753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:24.294 [2024-07-24 19:21:29.889829] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25038a0 was disconnected and freed. reset controller.
00:29:24.294 [2024-07-24 19:21:29.889920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.294 [2024-07-24 19:21:29.889949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.889970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.294 [2024-07-24 19:21:29.889997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.890017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.294 [2024-07-24 19:21:29.890034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.890053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:24.294 [2024-07-24 19:21:29.890071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.294 [2024-07-24 19:21:29.890088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.294 [2024-07-24 19:21:29.896440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.295 [2024-07-24 19:21:29.896509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.295 [2024-07-24 19:21:29.897706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.295 [2024-07-24 19:21:29.897792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.295 [2024-07-24 19:21:29.897832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.295 [2024-07-24 19:21:29.898371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.295 [2024-07-24 19:21:29.898813] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.295 [2024-07-24 19:21:29.898869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.295 [2024-07-24 19:21:29.898907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.295 [2024-07-24 19:21:29.906019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.295 [2024-07-24 19:21:29.915356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.295 [2024-07-24 19:21:29.916181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.295 [2024-07-24 19:21:29.916253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.295 [2024-07-24 19:21:29.916294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.295 [2024-07-24 19:21:29.916707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.295 [2024-07-24 19:21:29.917271] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.295 [2024-07-24 19:21:29.917324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.295 [2024-07-24 19:21:29.917358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.295 [2024-07-24 19:21:29.924484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.295 [2024-07-24 19:21:29.934078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.295 [2024-07-24 19:21:29.934842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.295 [2024-07-24 19:21:29.934913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.295 [2024-07-24 19:21:29.934953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.295 [2024-07-24 19:21:29.935543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.295 [2024-07-24 19:21:29.935968] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.295 [2024-07-24 19:21:29.936021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.295 [2024-07-24 19:21:29.936056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.295 [2024-07-24 19:21:29.943186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.295 [2024-07-24 19:21:29.952953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.295 [2024-07-24 19:21:29.953782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.295 [2024-07-24 19:21:29.953853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.295 [2024-07-24 19:21:29.953876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.295 [2024-07-24 19:21:29.954283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.295 [2024-07-24 19:21:29.954724] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.295 [2024-07-24 19:21:29.954779] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.295 [2024-07-24 19:21:29.954813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.295 [2024-07-24 19:21:29.961984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.295 [2024-07-24 19:21:29.972033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.295 [2024-07-24 19:21:29.972855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.295 [2024-07-24 19:21:29.972927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.295 [2024-07-24 19:21:29.972967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.295 [2024-07-24 19:21:29.973528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.295 [2024-07-24 19:21:29.974073] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.295 [2024-07-24 19:21:29.974126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.295 [2024-07-24 19:21:29.974160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.295 [2024-07-24 19:21:29.981289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.556 [2024-07-24 19:21:29.988663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.556 [2024-07-24 19:21:29.989454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.556 [2024-07-24 19:21:29.989509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.556 [2024-07-24 19:21:29.989532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.556 [2024-07-24 19:21:29.989962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.556 [2024-07-24 19:21:29.990533] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.556 [2024-07-24 19:21:29.990587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.556 [2024-07-24 19:21:29.990634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.556 [2024-07-24 19:21:29.997742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.556 [2024-07-24 19:21:30.006605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.556 [2024-07-24 19:21:30.007316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.556 [2024-07-24 19:21:30.007390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.556 [2024-07-24 19:21:30.007451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.556 [2024-07-24 19:21:30.007840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.556 [2024-07-24 19:21:30.008388] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.556 [2024-07-24 19:21:30.008457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.556 [2024-07-24 19:21:30.008501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.556 [2024-07-24 19:21:30.015658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.556 [2024-07-24 19:21:30.025192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.556 [2024-07-24 19:21:30.025897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.557 [2024-07-24 19:21:30.025980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.557 [2024-07-24 19:21:30.026022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.557 [2024-07-24 19:21:30.026563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.557 [2024-07-24 19:21:30.026989] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.557 [2024-07-24 19:21:30.027044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.557 [2024-07-24 19:21:30.027078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.557 [2024-07-24 19:21:30.034093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.557 [2024-07-24 19:21:30.043007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.557 [2024-07-24 19:21:30.043764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.557 [2024-07-24 19:21:30.043836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.557 [2024-07-24 19:21:30.043877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.557 [2024-07-24 19:21:30.044413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.557 [2024-07-24 19:21:30.044842] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.557 [2024-07-24 19:21:30.044896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.557 [2024-07-24 19:21:30.044931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.557 [2024-07-24 19:21:30.052056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.557 [2024-07-24 19:21:30.061005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.557 [2024-07-24 19:21:30.061770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.557 [2024-07-24 19:21:30.061857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.557 [2024-07-24 19:21:30.061901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.557 [2024-07-24 19:21:30.062462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.557 [2024-07-24 19:21:30.062876] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.557 [2024-07-24 19:21:30.062930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.557 [2024-07-24 19:21:30.062965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.557 [2024-07-24 19:21:30.069991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.557 [2024-07-24 19:21:30.078065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.557 [2024-07-24 19:21:30.078800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.557 [2024-07-24 19:21:30.078877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.557 [2024-07-24 19:21:30.078918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.557 [2024-07-24 19:21:30.079482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.557 [2024-07-24 19:21:30.079894] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.557 [2024-07-24 19:21:30.079949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.557 [2024-07-24 19:21:30.079984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.557 [2024-07-24 19:21:30.087062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.557 [2024-07-24 19:21:30.096358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.557 [2024-07-24 19:21:30.096985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.557 [2024-07-24 19:21:30.097057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.557 [2024-07-24 19:21:30.097097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.557 [2024-07-24 19:21:30.097584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.557 [2024-07-24 19:21:30.098090] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.557 [2024-07-24 19:21:30.098147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.557 [2024-07-24 19:21:30.098183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.557 [2024-07-24 19:21:30.105239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.557 [2024-07-24 19:21:30.114006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.557 [2024-07-24 19:21:30.114768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.557 [2024-07-24 19:21:30.114840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.557 [2024-07-24 19:21:30.114881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.557 [2024-07-24 19:21:30.115417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.557 [2024-07-24 19:21:30.115887] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.557 [2024-07-24 19:21:30.115942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.557 [2024-07-24 19:21:30.115976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.557 [2024-07-24 19:21:30.123250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.557 [2024-07-24 19:21:30.131621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.557 [2024-07-24 19:21:30.132400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.557 [2024-07-24 19:21:30.132489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.557 [2024-07-24 19:21:30.132512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.557 [2024-07-24 19:21:30.132825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.557 [2024-07-24 19:21:30.133375] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.557 [2024-07-24 19:21:30.133443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.557 [2024-07-24 19:21:30.133490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.557 [2024-07-24 19:21:30.139191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.557 [2024-07-24 19:21:30.148339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.557 [2024-07-24 19:21:30.148975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.557 [2024-07-24 19:21:30.149047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.557 [2024-07-24 19:21:30.149087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.557 [2024-07-24 19:21:30.149578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.557 [2024-07-24 19:21:30.150060] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.557 [2024-07-24 19:21:30.150114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.557 [2024-07-24 19:21:30.150148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.557 [2024-07-24 19:21:30.156682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.557 [2024-07-24 19:21:30.165853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.557 [2024-07-24 19:21:30.166691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.557 [2024-07-24 19:21:30.166729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.557 [2024-07-24 19:21:30.166751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.557 [2024-07-24 19:21:30.167298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.557 [2024-07-24 19:21:30.167727] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.557 [2024-07-24 19:21:30.167782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.557 [2024-07-24 19:21:30.167817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.557 [2024-07-24 19:21:30.174862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.557 [2024-07-24 19:21:30.183749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.557 [2024-07-24 19:21:30.184725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.557 [2024-07-24 19:21:30.184824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.557 [2024-07-24 19:21:30.184870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.557 [2024-07-24 19:21:30.185422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.557 [2024-07-24 19:21:30.185863] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.557 [2024-07-24 19:21:30.185917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.557 [2024-07-24 19:21:30.185951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.557 [2024-07-24 19:21:30.193079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.557 [2024-07-24 19:21:30.201668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.557 [2024-07-24 19:21:30.202538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.558 [2024-07-24 19:21:30.202613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.558 [2024-07-24 19:21:30.202654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.558 [2024-07-24 19:21:30.203193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.558 [2024-07-24 19:21:30.203663] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.558 [2024-07-24 19:21:30.203694] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.558 [2024-07-24 19:21:30.203712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.558 [2024-07-24 19:21:30.210870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.558 [2024-07-24 19:21:30.219970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.558 [2024-07-24 19:21:30.220839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.558 [2024-07-24 19:21:30.220911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.558 [2024-07-24 19:21:30.220951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.558 [2024-07-24 19:21:30.221525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.558 [2024-07-24 19:21:30.221934] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.558 [2024-07-24 19:21:30.221987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.558 [2024-07-24 19:21:30.222021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.558 [2024-07-24 19:21:30.229171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.558 [2024-07-24 19:21:30.238148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.558 [2024-07-24 19:21:30.238894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.558 [2024-07-24 19:21:30.238964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.558 [2024-07-24 19:21:30.239018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.558 [2024-07-24 19:21:30.239562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.558 [2024-07-24 19:21:30.239998] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.558 [2024-07-24 19:21:30.240052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.558 [2024-07-24 19:21:30.240086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.558 [2024-07-24 19:21:30.246902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.818 [2024-07-24 19:21:30.255049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.818 [2024-07-24 19:21:30.255790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.818 [2024-07-24 19:21:30.255860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.818 [2024-07-24 19:21:30.255899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.818 [2024-07-24 19:21:30.256240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.818 [2024-07-24 19:21:30.256616] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.818 [2024-07-24 19:21:30.256646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.818 [2024-07-24 19:21:30.256664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.818 [2024-07-24 19:21:30.263660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.818 [2024-07-24 19:21:30.273966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.818 [2024-07-24 19:21:30.274847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.818 [2024-07-24 19:21:30.274920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.818 [2024-07-24 19:21:30.274961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.818 [2024-07-24 19:21:30.275523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.818 [2024-07-24 19:21:30.276070] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.818 [2024-07-24 19:21:30.276122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.818 [2024-07-24 19:21:30.276155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.818 [2024-07-24 19:21:30.283279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.818 [2024-07-24 19:21:30.291992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.818 [2024-07-24 19:21:30.292867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.818 [2024-07-24 19:21:30.292940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.818 [2024-07-24 19:21:30.292981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.818 [2024-07-24 19:21:30.293523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.818 [2024-07-24 19:21:30.293937] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.818 [2024-07-24 19:21:30.294004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.818 [2024-07-24 19:21:30.294040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.818 [2024-07-24 19:21:30.301154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.818 [2024-07-24 19:21:30.309966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.818 [2024-07-24 19:21:30.310799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.818 [2024-07-24 19:21:30.310871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.818 [2024-07-24 19:21:30.310911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.818 [2024-07-24 19:21:30.311472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.818 [2024-07-24 19:21:30.311903] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.818 [2024-07-24 19:21:30.311957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.818 [2024-07-24 19:21:30.311991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.818 [2024-07-24 19:21:30.318951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.818 [2024-07-24 19:21:30.327600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.818 [2024-07-24 19:21:30.328411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.818 [2024-07-24 19:21:30.328497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.818 [2024-07-24 19:21:30.328520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.818 [2024-07-24 19:21:30.328890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.818 [2024-07-24 19:21:30.329452] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.818 [2024-07-24 19:21:30.329503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.818 [2024-07-24 19:21:30.329522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.818 [2024-07-24 19:21:30.336836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.818 [2024-07-24 19:21:30.346249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.818 [2024-07-24 19:21:30.346945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.818 [2024-07-24 19:21:30.347015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.818 [2024-07-24 19:21:30.347056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.818 [2024-07-24 19:21:30.347575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.818 [2024-07-24 19:21:30.348010] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.818 [2024-07-24 19:21:30.348063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.818 [2024-07-24 19:21:30.348097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.818 [2024-07-24 19:21:30.355003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.818 [2024-07-24 19:21:30.364393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.818 [2024-07-24 19:21:30.365286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.818 [2024-07-24 19:21:30.365357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.818 [2024-07-24 19:21:30.365398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.818 [2024-07-24 19:21:30.365789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.818 [2024-07-24 19:21:30.366338] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.818 [2024-07-24 19:21:30.366390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.818 [2024-07-24 19:21:30.366424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.818 [2024-07-24 19:21:30.373513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.818 [2024-07-24 19:21:30.382635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.818 [2024-07-24 19:21:30.383494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.818 [2024-07-24 19:21:30.383565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.818 [2024-07-24 19:21:30.383606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.818 [2024-07-24 19:21:30.384142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.818 [2024-07-24 19:21:30.384625] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.818 [2024-07-24 19:21:30.384656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.818 [2024-07-24 19:21:30.384674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.818 [2024-07-24 19:21:30.390237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.818 [2024-07-24 19:21:30.400554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.818 [2024-07-24 19:21:30.401362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.818 [2024-07-24 19:21:30.401457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.818 [2024-07-24 19:21:30.401504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.818 [2024-07-24 19:21:30.401870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.818 [2024-07-24 19:21:30.402417] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.819 [2024-07-24 19:21:30.402492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.819 [2024-07-24 19:21:30.402512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.819 [2024-07-24 19:21:30.409609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.819 [2024-07-24 19:21:30.418708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.819 [2024-07-24 19:21:30.419587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.819 [2024-07-24 19:21:30.419658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.819 [2024-07-24 19:21:30.419698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.819 [2024-07-24 19:21:30.420247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.819 [2024-07-24 19:21:30.420688] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.819 [2024-07-24 19:21:30.420736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.819 [2024-07-24 19:21:30.420773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.819 [2024-07-24 19:21:30.427947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.819 [2024-07-24 19:21:30.436501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.819 [2024-07-24 19:21:30.437141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.819 [2024-07-24 19:21:30.437212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.819 [2024-07-24 19:21:30.437253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.819 [2024-07-24 19:21:30.437797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.819 [2024-07-24 19:21:30.438210] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.819 [2024-07-24 19:21:30.438263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.819 [2024-07-24 19:21:30.438298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.819 [2024-07-24 19:21:30.445143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.819 [2024-07-24 19:21:30.454213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.819 [2024-07-24 19:21:30.454884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.819 [2024-07-24 19:21:30.454955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.819 [2024-07-24 19:21:30.454995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.819 [2024-07-24 19:21:30.455526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.819 [2024-07-24 19:21:30.455898] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.819 [2024-07-24 19:21:30.455951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.819 [2024-07-24 19:21:30.455985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.819 [2024-07-24 19:21:30.462958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.819 [2024-07-24 19:21:30.471638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.819 [2024-07-24 19:21:30.472469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.819 [2024-07-24 19:21:30.472535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.819 [2024-07-24 19:21:30.472557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.819 [2024-07-24 19:21:30.472985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.819 [2024-07-24 19:21:30.473554] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.819 [2024-07-24 19:21:30.473585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.819 [2024-07-24 19:21:30.473611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.819 [2024-07-24 19:21:30.480563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.819 [2024-07-24 19:21:30.489191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.819 [2024-07-24 19:21:30.489899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.819 [2024-07-24 19:21:30.489970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.819 [2024-07-24 19:21:30.490010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.819 [2024-07-24 19:21:30.490540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.819 [2024-07-24 19:21:30.490990] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.819 [2024-07-24 19:21:30.491044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.819 [2024-07-24 19:21:30.491078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.819 [2024-07-24 19:21:30.498190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.819 [2024-07-24 19:21:30.506761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.819 [2024-07-24 19:21:30.507583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.819 [2024-07-24 19:21:30.507623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:24.819 [2024-07-24 19:21:30.507646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:24.819 [2024-07-24 19:21:30.508176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:24.819 [2024-07-24 19:21:30.508641] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.819 [2024-07-24 19:21:30.508670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.819 [2024-07-24 19:21:30.508687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.080 [2024-07-24 19:21:30.514071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the same reset cycle (resetting controller -> connect() failed, errno = 111 to addr=10.0.0.2, port=4420 -> Failed to flush tqpair (9): Bad file descriptor -> Ctrlr is in error state -> controller reinitialization failed -> Resetting controller failed) repeats 48 more times for tqpair=0x22d3540 between 19:21:30.521709 and 19:21:31.348407, identical except for timestamps ...]
00:29:25.869 [2024-07-24 19:21:31.357844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.869 [2024-07-24 19:21:31.358393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.869 [2024-07-24 19:21:31.358439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:25.869 [2024-07-24 19:21:31.358473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:25.869 [2024-07-24 19:21:31.358755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:25.869 [2024-07-24 19:21:31.359039] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.869 [2024-07-24 19:21:31.359067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.869 [2024-07-24 19:21:31.359084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.869 [2024-07-24 19:21:31.363282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.869 [2024-07-24 19:21:31.372472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.869 [2024-07-24 19:21:31.372995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.869 [2024-07-24 19:21:31.373032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:25.869 [2024-07-24 19:21:31.373052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:25.869 [2024-07-24 19:21:31.373332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:25.869 [2024-07-24 19:21:31.373629] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.869 [2024-07-24 19:21:31.373657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.869 [2024-07-24 19:21:31.373675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.869 [2024-07-24 19:21:31.377888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.869 [2024-07-24 19:21:31.387065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.870 [2024-07-24 19:21:31.387549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.870 [2024-07-24 19:21:31.387586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:25.870 [2024-07-24 19:21:31.387607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:25.870 [2024-07-24 19:21:31.387887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:25.870 [2024-07-24 19:21:31.388172] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.870 [2024-07-24 19:21:31.388198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.870 [2024-07-24 19:21:31.388216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.870 [2024-07-24 19:21:31.392419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.870 [2024-07-24 19:21:31.401594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.870 [2024-07-24 19:21:31.402211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.870 [2024-07-24 19:21:31.402262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:25.870 [2024-07-24 19:21:31.402285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:25.870 [2024-07-24 19:21:31.402586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:25.870 [2024-07-24 19:21:31.402874] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.870 [2024-07-24 19:21:31.402909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.870 [2024-07-24 19:21:31.402928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.870 [2024-07-24 19:21:31.407150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.870 [2024-07-24 19:21:31.416327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.870 [2024-07-24 19:21:31.416898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.870 [2024-07-24 19:21:31.416937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:25.870 [2024-07-24 19:21:31.416959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:25.870 [2024-07-24 19:21:31.417239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:25.870 [2024-07-24 19:21:31.417539] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.870 [2024-07-24 19:21:31.417567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.870 [2024-07-24 19:21:31.417585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.870 [2024-07-24 19:21:31.421784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.870 [2024-07-24 19:21:31.430990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.870 [2024-07-24 19:21:31.431554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.870 [2024-07-24 19:21:31.431592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:25.870 [2024-07-24 19:21:31.431612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:25.870 [2024-07-24 19:21:31.431892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:25.870 [2024-07-24 19:21:31.432176] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.870 [2024-07-24 19:21:31.432203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.870 [2024-07-24 19:21:31.432221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.870 [2024-07-24 19:21:31.436419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.870 [2024-07-24 19:21:31.445616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.870 [2024-07-24 19:21:31.446240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.870 [2024-07-24 19:21:31.446291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:25.870 [2024-07-24 19:21:31.446315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:25.870 [2024-07-24 19:21:31.446618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:25.870 [2024-07-24 19:21:31.446906] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.870 [2024-07-24 19:21:31.446933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.870 [2024-07-24 19:21:31.446951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.870 [2024-07-24 19:21:31.451156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.870 [2024-07-24 19:21:31.460340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.870 [2024-07-24 19:21:31.460890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.870 [2024-07-24 19:21:31.460928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:25.870 [2024-07-24 19:21:31.460950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:25.870 [2024-07-24 19:21:31.461230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:25.870 [2024-07-24 19:21:31.461529] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.870 [2024-07-24 19:21:31.461559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.870 [2024-07-24 19:21:31.461576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.870 [2024-07-24 19:21:31.465775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.870 [2024-07-24 19:21:31.474948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.870 [2024-07-24 19:21:31.475511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.870 [2024-07-24 19:21:31.475548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:25.870 [2024-07-24 19:21:31.475568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:25.870 [2024-07-24 19:21:31.475849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:25.870 [2024-07-24 19:21:31.476134] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.870 [2024-07-24 19:21:31.476161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.870 [2024-07-24 19:21:31.476179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.870 [2024-07-24 19:21:31.480382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.870 [2024-07-24 19:21:31.489555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.870 [2024-07-24 19:21:31.490089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.870 [2024-07-24 19:21:31.490126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:25.870 [2024-07-24 19:21:31.490147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:25.870 [2024-07-24 19:21:31.490438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:25.870 [2024-07-24 19:21:31.490723] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.870 [2024-07-24 19:21:31.490751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.870 [2024-07-24 19:21:31.490769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.870 [2024-07-24 19:21:31.494963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.870 [2024-07-24 19:21:31.504144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.870 [2024-07-24 19:21:31.504690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.870 [2024-07-24 19:21:31.504727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:25.870 [2024-07-24 19:21:31.504754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:25.870 [2024-07-24 19:21:31.505035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:25.870 [2024-07-24 19:21:31.505319] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.870 [2024-07-24 19:21:31.505347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.870 [2024-07-24 19:21:31.505364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.870 [2024-07-24 19:21:31.509574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.870 [2024-07-24 19:21:31.518746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.870 [2024-07-24 19:21:31.519281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.870 [2024-07-24 19:21:31.519318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:25.870 [2024-07-24 19:21:31.519339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:25.870 [2024-07-24 19:21:31.519640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:25.870 [2024-07-24 19:21:31.519925] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.870 [2024-07-24 19:21:31.519953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.870 [2024-07-24 19:21:31.519971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.870 [2024-07-24 19:21:31.524166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.870 [2024-07-24 19:21:31.533330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.870 [2024-07-24 19:21:31.533861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.870 [2024-07-24 19:21:31.533897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:25.871 [2024-07-24 19:21:31.533918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:25.871 [2024-07-24 19:21:31.534197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:25.871 [2024-07-24 19:21:31.534493] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.871 [2024-07-24 19:21:31.534521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.871 [2024-07-24 19:21:31.534539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.871 [2024-07-24 19:21:31.538741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.871 [2024-07-24 19:21:31.547913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.871 [2024-07-24 19:21:31.548504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.871 [2024-07-24 19:21:31.548543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:25.871 [2024-07-24 19:21:31.548564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:25.871 [2024-07-24 19:21:31.548844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:25.871 [2024-07-24 19:21:31.549129] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.871 [2024-07-24 19:21:31.549167] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.871 [2024-07-24 19:21:31.549186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.871 [2024-07-24 19:21:31.553386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.871 [2024-07-24 19:21:31.562568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.871 [2024-07-24 19:21:31.563106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.871 [2024-07-24 19:21:31.563143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:25.871 [2024-07-24 19:21:31.563164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:25.871 [2024-07-24 19:21:31.563458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:25.871 [2024-07-24 19:21:31.563744] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.871 [2024-07-24 19:21:31.563771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.871 [2024-07-24 19:21:31.563789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.131 [2024-07-24 19:21:31.567987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.131 [2024-07-24 19:21:31.577172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.131 [2024-07-24 19:21:31.577739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.131 [2024-07-24 19:21:31.577776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.131 [2024-07-24 19:21:31.577797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.131 [2024-07-24 19:21:31.578076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.131 [2024-07-24 19:21:31.578361] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.131 [2024-07-24 19:21:31.578389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.131 [2024-07-24 19:21:31.578406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.131 [2024-07-24 19:21:31.582612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
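
Each failed cycle also reports "Failed to flush tqpair=0x22d3540 (9): Bad file descriptor": the (9) is errno EBADF, which is what any I/O returns once the qpair's socket has already been torn down by the refused connect. A short stand-in demonstration (the AF_UNIX socketpair here is only a placeholder for the qpair's TCP socket, not how SPDK manages it):

/* Sketch: errno 9 (EBADF) from I/O on a descriptor that is already closed,
 * mirroring the flush errors above. Illustrative only. Build: cc ebadf.c */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
        perror("socketpair");
        return 1;
    }

    close(sv[0]);                     /* the socket is torn down first... */

    char byte = 0;
    if (write(sv[0], &byte, 1) < 0) { /* ...then a flush is attempted on it */
        /* Prints: write failed, errno = 9 (Bad file descriptor) */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(sv[1]);
    return 0;
}
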
00:29:26.131 [2024-07-24 19:21:31.591780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.131 [2024-07-24 19:21:31.592397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.131 [2024-07-24 19:21:31.592458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.131 [2024-07-24 19:21:31.592483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.131 [2024-07-24 19:21:31.592771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.131 [2024-07-24 19:21:31.593057] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.131 [2024-07-24 19:21:31.593085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.131 [2024-07-24 19:21:31.593103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.131 [2024-07-24 19:21:31.597307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.131 [2024-07-24 19:21:31.606504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.131 [2024-07-24 19:21:31.607045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.131 [2024-07-24 19:21:31.607083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.131 [2024-07-24 19:21:31.607104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.131 [2024-07-24 19:21:31.607384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.131 [2024-07-24 19:21:31.607682] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.131 [2024-07-24 19:21:31.607710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.131 [2024-07-24 19:21:31.607728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.131 [2024-07-24 19:21:31.611926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.131 [2024-07-24 19:21:31.621098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.131 [2024-07-24 19:21:31.621566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.131 [2024-07-24 19:21:31.621604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.131 [2024-07-24 19:21:31.621625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.131 [2024-07-24 19:21:31.621904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.131 [2024-07-24 19:21:31.622189] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.131 [2024-07-24 19:21:31.622216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.131 [2024-07-24 19:21:31.622234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.131 [2024-07-24 19:21:31.626442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.131 [2024-07-24 19:21:31.635632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.131 [2024-07-24 19:21:31.636093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.131 [2024-07-24 19:21:31.636130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.131 [2024-07-24 19:21:31.636151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.131 [2024-07-24 19:21:31.636447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.131 [2024-07-24 19:21:31.636733] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.131 [2024-07-24 19:21:31.636761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.131 [2024-07-24 19:21:31.636780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.131 [2024-07-24 19:21:31.640979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.131 [2024-07-24 19:21:31.650151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.131 [2024-07-24 19:21:31.650600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.131 [2024-07-24 19:21:31.650637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.131 [2024-07-24 19:21:31.650658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.131 [2024-07-24 19:21:31.650944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.131 [2024-07-24 19:21:31.651230] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.131 [2024-07-24 19:21:31.651257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.131 [2024-07-24 19:21:31.651274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.131 [2024-07-24 19:21:31.655494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.131 [2024-07-24 19:21:31.664710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.131 [2024-07-24 19:21:31.665227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.131 [2024-07-24 19:21:31.665264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.131 [2024-07-24 19:21:31.665285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.131 [2024-07-24 19:21:31.665576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.131 [2024-07-24 19:21:31.665861] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.131 [2024-07-24 19:21:31.665889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.131 [2024-07-24 19:21:31.665907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.131 [2024-07-24 19:21:31.670106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.131 [2024-07-24 19:21:31.679297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.131 [2024-07-24 19:21:31.679788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.131 [2024-07-24 19:21:31.679825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.131 [2024-07-24 19:21:31.679846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.131 [2024-07-24 19:21:31.680126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.131 [2024-07-24 19:21:31.680410] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.131 [2024-07-24 19:21:31.680450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.131 [2024-07-24 19:21:31.680469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.131 [2024-07-24 19:21:31.684669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.131 [2024-07-24 19:21:31.693853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.131 [2024-07-24 19:21:31.694416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.131 [2024-07-24 19:21:31.694464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.131 [2024-07-24 19:21:31.694485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.132 [2024-07-24 19:21:31.694766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.132 [2024-07-24 19:21:31.695051] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.132 [2024-07-24 19:21:31.695078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.132 [2024-07-24 19:21:31.695103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.132 [2024-07-24 19:21:31.699303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.132 [2024-07-24 19:21:31.708513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.132 [2024-07-24 19:21:31.709010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.132 [2024-07-24 19:21:31.709048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.132 [2024-07-24 19:21:31.709069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.132 [2024-07-24 19:21:31.709349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.132 [2024-07-24 19:21:31.709644] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.132 [2024-07-24 19:21:31.709673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.132 [2024-07-24 19:21:31.709691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.132 [2024-07-24 19:21:31.713887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.132 [2024-07-24 19:21:31.723059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.132 [2024-07-24 19:21:31.723541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.132 [2024-07-24 19:21:31.723579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.132 [2024-07-24 19:21:31.723600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.132 [2024-07-24 19:21:31.723881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.132 [2024-07-24 19:21:31.724165] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.132 [2024-07-24 19:21:31.724193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.132 [2024-07-24 19:21:31.724211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.132 [2024-07-24 19:21:31.728406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.132 [2024-07-24 19:21:31.737595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.132 [2024-07-24 19:21:31.738122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.132 [2024-07-24 19:21:31.738158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.132 [2024-07-24 19:21:31.738179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.132 [2024-07-24 19:21:31.738469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.132 [2024-07-24 19:21:31.738755] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.132 [2024-07-24 19:21:31.738783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.132 [2024-07-24 19:21:31.738801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.132 [2024-07-24 19:21:31.743000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.132 [2024-07-24 19:21:31.752168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.132 [2024-07-24 19:21:31.752615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.132 [2024-07-24 19:21:31.752658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.132 [2024-07-24 19:21:31.752681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.132 [2024-07-24 19:21:31.752961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.132 [2024-07-24 19:21:31.753246] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.132 [2024-07-24 19:21:31.753274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.132 [2024-07-24 19:21:31.753291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.132 [2024-07-24 19:21:31.757502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.132 [2024-07-24 19:21:31.766680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.132 [2024-07-24 19:21:31.767188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.132 [2024-07-24 19:21:31.767225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.132 [2024-07-24 19:21:31.767245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.132 [2024-07-24 19:21:31.767536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.132 [2024-07-24 19:21:31.767821] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.132 [2024-07-24 19:21:31.767849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.132 [2024-07-24 19:21:31.767866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.132 [2024-07-24 19:21:31.772073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.132 [2024-07-24 19:21:31.781301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.132 [2024-07-24 19:21:31.781767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.132 [2024-07-24 19:21:31.781804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.132 [2024-07-24 19:21:31.781825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.132 [2024-07-24 19:21:31.782105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.132 [2024-07-24 19:21:31.782390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.132 [2024-07-24 19:21:31.782418] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.132 [2024-07-24 19:21:31.782447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.132 [2024-07-24 19:21:31.786649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.132 [2024-07-24 19:21:31.795827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.132 [2024-07-24 19:21:31.796308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.132 [2024-07-24 19:21:31.796344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.132 [2024-07-24 19:21:31.796365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.132 [2024-07-24 19:21:31.796655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.132 [2024-07-24 19:21:31.796949] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.132 [2024-07-24 19:21:31.796976] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.132 [2024-07-24 19:21:31.796995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.132 [2024-07-24 19:21:31.801190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.132 [2024-07-24 19:21:31.810371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.132 [2024-07-24 19:21:31.810865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.132 [2024-07-24 19:21:31.810901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.132 [2024-07-24 19:21:31.810923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.132 [2024-07-24 19:21:31.811202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.132 [2024-07-24 19:21:31.811519] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.132 [2024-07-24 19:21:31.811550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.132 [2024-07-24 19:21:31.811568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.132 [2024-07-24 19:21:31.815768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.132 [2024-07-24 19:21:31.824933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.132 [2024-07-24 19:21:31.825425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.132 [2024-07-24 19:21:31.825470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.132 [2024-07-24 19:21:31.825492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.132 [2024-07-24 19:21:31.825771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.132 [2024-07-24 19:21:31.826056] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.132 [2024-07-24 19:21:31.826083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.132 [2024-07-24 19:21:31.826101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.393 [2024-07-24 19:21:31.830296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.393 [2024-07-24 19:21:31.839484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.393 [2024-07-24 19:21:31.840013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.393 [2024-07-24 19:21:31.840050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.393 [2024-07-24 19:21:31.840071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.393 [2024-07-24 19:21:31.840350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.393 [2024-07-24 19:21:31.840644] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.393 [2024-07-24 19:21:31.840681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.393 [2024-07-24 19:21:31.840698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.393 [2024-07-24 19:21:31.844906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.393 [2024-07-24 19:21:31.854078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.393 [2024-07-24 19:21:31.854674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.393 [2024-07-24 19:21:31.854712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.393 [2024-07-24 19:21:31.854733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.393 [2024-07-24 19:21:31.855012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.393 [2024-07-24 19:21:31.855319] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.393 [2024-07-24 19:21:31.855348] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.393 [2024-07-24 19:21:31.855365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.393 [2024-07-24 19:21:31.859575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.393 [2024-07-24 19:21:31.868746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.393 [2024-07-24 19:21:31.869256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.393 [2024-07-24 19:21:31.869292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.393 [2024-07-24 19:21:31.869313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.393 [2024-07-24 19:21:31.869603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.393 [2024-07-24 19:21:31.869888] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.393 [2024-07-24 19:21:31.869915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.393 [2024-07-24 19:21:31.869933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.393 [2024-07-24 19:21:31.874131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.393 [2024-07-24 19:21:31.883335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.393 [2024-07-24 19:21:31.883889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.393 [2024-07-24 19:21:31.883926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.393 [2024-07-24 19:21:31.883947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.393 [2024-07-24 19:21:31.884227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.393 [2024-07-24 19:21:31.884523] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.393 [2024-07-24 19:21:31.884551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.393 [2024-07-24 19:21:31.884569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.393 [2024-07-24 19:21:31.888768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.393 [2024-07-24 19:21:31.897941] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.393 [2024-07-24 19:21:31.898461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.393 [2024-07-24 19:21:31.898505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.393 [2024-07-24 19:21:31.898533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.393 [2024-07-24 19:21:31.898814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.393 [2024-07-24 19:21:31.899099] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.393 [2024-07-24 19:21:31.899126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.393 [2024-07-24 19:21:31.899144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.393 [2024-07-24 19:21:31.903346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.393 [2024-07-24 19:21:31.912540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.394 [2024-07-24 19:21:31.913154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.394 [2024-07-24 19:21:31.913205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.394 [2024-07-24 19:21:31.913228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.394 [2024-07-24 19:21:31.913530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.394 [2024-07-24 19:21:31.913817] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.394 [2024-07-24 19:21:31.913846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.394 [2024-07-24 19:21:31.913863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.394 [2024-07-24 19:21:31.918067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.394 [2024-07-24 19:21:31.927246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.394 [2024-07-24 19:21:31.927849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.394 [2024-07-24 19:21:31.927888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.394 [2024-07-24 19:21:31.927909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.394 [2024-07-24 19:21:31.928190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.394 [2024-07-24 19:21:31.928491] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.394 [2024-07-24 19:21:31.928519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.394 [2024-07-24 19:21:31.928537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.394 [2024-07-24 19:21:31.932733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.394 [2024-07-24 19:21:31.941912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.394 [2024-07-24 19:21:31.942471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.394 [2024-07-24 19:21:31.942509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.394 [2024-07-24 19:21:31.942530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.394 [2024-07-24 19:21:31.942810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.394 [2024-07-24 19:21:31.943095] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.394 [2024-07-24 19:21:31.943130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.394 [2024-07-24 19:21:31.943148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.394 [2024-07-24 19:21:31.947353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.394 [2024-07-24 19:21:31.956530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.394 [2024-07-24 19:21:31.957100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.394 [2024-07-24 19:21:31.957138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:26.394 [2024-07-24 19:21:31.957159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:26.394 [2024-07-24 19:21:31.957452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:26.394 [2024-07-24 19:21:31.957741] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.394 [2024-07-24 19:21:31.957768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.394 [2024-07-24 19:21:31.957786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.394 [2024-07-24 19:21:31.961982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.394 [2024-07-24 19:21:31.971151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.394 [2024-07-24 19:21:31.971700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.394 [2024-07-24 19:21:31.971738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.394 [2024-07-24 19:21:31.971758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.394 [2024-07-24 19:21:31.972038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.394 [2024-07-24 19:21:31.972323] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.394 [2024-07-24 19:21:31.972350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.394 [2024-07-24 19:21:31.972368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.394 [2024-07-24 19:21:31.976593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.394 [2024-07-24 19:21:31.985769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.394 [2024-07-24 19:21:31.986349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.394 [2024-07-24 19:21:31.986386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.394 [2024-07-24 19:21:31.986407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.394 [2024-07-24 19:21:31.986700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.394 [2024-07-24 19:21:31.986985] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.394 [2024-07-24 19:21:31.987013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.394 [2024-07-24 19:21:31.987031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.394 [2024-07-24 19:21:31.991230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.394 [2024-07-24 19:21:32.000401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.394 [2024-07-24 19:21:32.000954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.394 [2024-07-24 19:21:32.000992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.394 [2024-07-24 19:21:32.001013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.394 [2024-07-24 19:21:32.001293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.394 [2024-07-24 19:21:32.001590] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.394 [2024-07-24 19:21:32.001619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.394 [2024-07-24 19:21:32.001636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.394 [2024-07-24 19:21:32.005912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.394 [2024-07-24 19:21:32.015088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.394 [2024-07-24 19:21:32.015668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.394 [2024-07-24 19:21:32.015706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.394 [2024-07-24 19:21:32.015728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.394 [2024-07-24 19:21:32.016007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.394 [2024-07-24 19:21:32.016292] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.394 [2024-07-24 19:21:32.016319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.394 [2024-07-24 19:21:32.016337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.394 [2024-07-24 19:21:32.020545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.394 [2024-07-24 19:21:32.029720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.394 [2024-07-24 19:21:32.030245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.394 [2024-07-24 19:21:32.030282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.394 [2024-07-24 19:21:32.030303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.394 [2024-07-24 19:21:32.030595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.394 [2024-07-24 19:21:32.030882] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.394 [2024-07-24 19:21:32.030909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.394 [2024-07-24 19:21:32.030926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.394 [2024-07-24 19:21:32.035121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.394 [2024-07-24 19:21:32.044296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.394 [2024-07-24 19:21:32.044860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.394 [2024-07-24 19:21:32.044897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.394 [2024-07-24 19:21:32.044918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.394 [2024-07-24 19:21:32.045205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.394 [2024-07-24 19:21:32.045504] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.394 [2024-07-24 19:21:32.045533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.394 [2024-07-24 19:21:32.045551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.394 [2024-07-24 19:21:32.049744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.394 [2024-07-24 19:21:32.058914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.394 [2024-07-24 19:21:32.059513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.395 [2024-07-24 19:21:32.059561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.395 [2024-07-24 19:21:32.059582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.395 [2024-07-24 19:21:32.059862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.395 [2024-07-24 19:21:32.060146] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.395 [2024-07-24 19:21:32.060174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.395 [2024-07-24 19:21:32.060191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.395 [2024-07-24 19:21:32.064395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.395 [2024-07-24 19:21:32.073577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.395 [2024-07-24 19:21:32.074130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.395 [2024-07-24 19:21:32.074167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.395 [2024-07-24 19:21:32.074188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.395 [2024-07-24 19:21:32.074480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.395 [2024-07-24 19:21:32.074765] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.395 [2024-07-24 19:21:32.074792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.395 [2024-07-24 19:21:32.074810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.395 [2024-07-24 19:21:32.079002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.395 [2024-07-24 19:21:32.088176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.656 [2024-07-24 19:21:32.088711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.656 [2024-07-24 19:21:32.088749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.656 [2024-07-24 19:21:32.088770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.656 [2024-07-24 19:21:32.089049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.656 [2024-07-24 19:21:32.089334] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.656 [2024-07-24 19:21:32.089361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.656 [2024-07-24 19:21:32.089386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.656 [2024-07-24 19:21:32.093597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.656 [2024-07-24 19:21:32.102769] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.657 [2024-07-24 19:21:32.103343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.657 [2024-07-24 19:21:32.103380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.657 [2024-07-24 19:21:32.103401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.657 [2024-07-24 19:21:32.103691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.657 [2024-07-24 19:21:32.103976] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.657 [2024-07-24 19:21:32.104004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.657 [2024-07-24 19:21:32.104022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.657 [2024-07-24 19:21:32.108238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.657 [2024-07-24 19:21:32.117407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.657 [2024-07-24 19:21:32.117912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.657 [2024-07-24 19:21:32.117949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.657 [2024-07-24 19:21:32.117969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.657 [2024-07-24 19:21:32.118248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.657 [2024-07-24 19:21:32.118547] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.657 [2024-07-24 19:21:32.118575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.657 [2024-07-24 19:21:32.118593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.657 [2024-07-24 19:21:32.122799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.657 [2024-07-24 19:21:32.131962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.657 [2024-07-24 19:21:32.132443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.657 [2024-07-24 19:21:32.132481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.657 [2024-07-24 19:21:32.132502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.657 [2024-07-24 19:21:32.132781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.657 [2024-07-24 19:21:32.133066] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.657 [2024-07-24 19:21:32.133093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.657 [2024-07-24 19:21:32.133111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.657 [2024-07-24 19:21:32.137318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.657 [2024-07-24 19:21:32.146494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.657 [2024-07-24 19:21:32.146994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.657 [2024-07-24 19:21:32.147031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.657 [2024-07-24 19:21:32.147052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.657 [2024-07-24 19:21:32.147331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.657 [2024-07-24 19:21:32.147627] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.657 [2024-07-24 19:21:32.147654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.657 [2024-07-24 19:21:32.147672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.657 [2024-07-24 19:21:32.151873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.657 [2024-07-24 19:21:32.161041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.657 [2024-07-24 19:21:32.161587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.657 [2024-07-24 19:21:32.161628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.657 [2024-07-24 19:21:32.161649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.657 [2024-07-24 19:21:32.161929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.657 [2024-07-24 19:21:32.162213] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.657 [2024-07-24 19:21:32.162240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.657 [2024-07-24 19:21:32.162258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.657 [2024-07-24 19:21:32.166463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.657 [2024-07-24 19:21:32.175626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.657 [2024-07-24 19:21:32.176200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.657 [2024-07-24 19:21:32.176237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.657 [2024-07-24 19:21:32.176258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.657 [2024-07-24 19:21:32.176549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.657 [2024-07-24 19:21:32.176835] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.657 [2024-07-24 19:21:32.176863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.657 [2024-07-24 19:21:32.176881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.657 [2024-07-24 19:21:32.181084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.657 [2024-07-24 19:21:32.190251] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.657 [2024-07-24 19:21:32.190799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.657 [2024-07-24 19:21:32.190836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.657 [2024-07-24 19:21:32.190858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.657 [2024-07-24 19:21:32.191144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.657 [2024-07-24 19:21:32.191440] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.657 [2024-07-24 19:21:32.191467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.657 [2024-07-24 19:21:32.191485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.657 [2024-07-24 19:21:32.195682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.657 [2024-07-24 19:21:32.204844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.657 [2024-07-24 19:21:32.205339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.657 [2024-07-24 19:21:32.205375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.657 [2024-07-24 19:21:32.205396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.657 [2024-07-24 19:21:32.205689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.657 [2024-07-24 19:21:32.205974] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.657 [2024-07-24 19:21:32.206001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.657 [2024-07-24 19:21:32.206018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.657 [2024-07-24 19:21:32.210228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.657 [2024-07-24 19:21:32.219390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.657 [2024-07-24 19:21:32.219928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.657 [2024-07-24 19:21:32.219965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.657 [2024-07-24 19:21:32.219986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.657 [2024-07-24 19:21:32.220265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.657 [2024-07-24 19:21:32.220563] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.657 [2024-07-24 19:21:32.220591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.657 [2024-07-24 19:21:32.220608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.657 [2024-07-24 19:21:32.224801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.657 [2024-07-24 19:21:32.233966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.657 [2024-07-24 19:21:32.234572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.657 [2024-07-24 19:21:32.234608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.657 [2024-07-24 19:21:32.234629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.657 [2024-07-24 19:21:32.234909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.657 [2024-07-24 19:21:32.235193] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.657 [2024-07-24 19:21:32.235220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.657 [2024-07-24 19:21:32.235245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.657 [2024-07-24 19:21:32.239458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.657 [2024-07-24 19:21:32.248621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.657 [2024-07-24 19:21:32.249126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.658 [2024-07-24 19:21:32.249163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.658 [2024-07-24 19:21:32.249183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.658 [2024-07-24 19:21:32.249475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.658 [2024-07-24 19:21:32.249760] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.658 [2024-07-24 19:21:32.249787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.658 [2024-07-24 19:21:32.249805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.658 [2024-07-24 19:21:32.253998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.658 [2024-07-24 19:21:32.263157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.658 [2024-07-24 19:21:32.263698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.658 [2024-07-24 19:21:32.263734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.658 [2024-07-24 19:21:32.263755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.658 [2024-07-24 19:21:32.264034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.658 [2024-07-24 19:21:32.264318] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.658 [2024-07-24 19:21:32.264346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.658 [2024-07-24 19:21:32.264362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.658 [2024-07-24 19:21:32.268575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.658 [2024-07-24 19:21:32.277771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.658 [2024-07-24 19:21:32.278355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.658 [2024-07-24 19:21:32.278393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.658 [2024-07-24 19:21:32.278414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.658 [2024-07-24 19:21:32.278707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.658 [2024-07-24 19:21:32.278992] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.658 [2024-07-24 19:21:32.279019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.658 [2024-07-24 19:21:32.279036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.658 [2024-07-24 19:21:32.283248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.658 [2024-07-24 19:21:32.292411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.658 [2024-07-24 19:21:32.292971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.658 [2024-07-24 19:21:32.293020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.658 [2024-07-24 19:21:32.293042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.658 [2024-07-24 19:21:32.293322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.658 [2024-07-24 19:21:32.293620] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.658 [2024-07-24 19:21:32.293648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.658 [2024-07-24 19:21:32.293666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.658 [2024-07-24 19:21:32.297860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.658 [2024-07-24 19:21:32.307036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.658 [2024-07-24 19:21:32.307572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.658 [2024-07-24 19:21:32.307609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.658 [2024-07-24 19:21:32.307630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.658 [2024-07-24 19:21:32.307909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.658 [2024-07-24 19:21:32.308193] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.658 [2024-07-24 19:21:32.308220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.658 [2024-07-24 19:21:32.308238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.658 [2024-07-24 19:21:32.312449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.658 [2024-07-24 19:21:32.321616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.658 [2024-07-24 19:21:32.322147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.658 [2024-07-24 19:21:32.322184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.658 [2024-07-24 19:21:32.322205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.658 [2024-07-24 19:21:32.322497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.658 [2024-07-24 19:21:32.322782] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.658 [2024-07-24 19:21:32.322810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.658 [2024-07-24 19:21:32.322827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.658 [2024-07-24 19:21:32.327023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.658 [2024-07-24 19:21:32.336184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.658 [2024-07-24 19:21:32.336728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.658 [2024-07-24 19:21:32.336765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.658 [2024-07-24 19:21:32.336786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.658 [2024-07-24 19:21:32.337065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.658 [2024-07-24 19:21:32.337358] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.658 [2024-07-24 19:21:32.337385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.658 [2024-07-24 19:21:32.337403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.658 [2024-07-24 19:21:32.341608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.658 [2024-07-24 19:21:32.350775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.658 [2024-07-24 19:21:32.351386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.658 [2024-07-24 19:21:32.351452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.658 [2024-07-24 19:21:32.351479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.919 [2024-07-24 19:21:32.351778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.919 [2024-07-24 19:21:32.352068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.919 [2024-07-24 19:21:32.352096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.919 [2024-07-24 19:21:32.352114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.919 [2024-07-24 19:21:32.356319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.919 [2024-07-24 19:21:32.365518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.919 [2024-07-24 19:21:32.365977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.919 [2024-07-24 19:21:32.366016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.919 [2024-07-24 19:21:32.366036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.919 [2024-07-24 19:21:32.366317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.919 [2024-07-24 19:21:32.366615] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.919 [2024-07-24 19:21:32.366644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.919 [2024-07-24 19:21:32.366661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.919 [2024-07-24 19:21:32.370861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.919 [2024-07-24 19:21:32.380067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.919 [2024-07-24 19:21:32.380578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.919 [2024-07-24 19:21:32.380616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.919 [2024-07-24 19:21:32.380637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.919 [2024-07-24 19:21:32.380917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.919 [2024-07-24 19:21:32.381201] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.919 [2024-07-24 19:21:32.381228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.919 [2024-07-24 19:21:32.381245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.919 [2024-07-24 19:21:32.385524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.919 [2024-07-24 19:21:32.397646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.919 [2024-07-24 19:21:32.398450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.919 [2024-07-24 19:21:32.398508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.919 [2024-07-24 19:21:32.398530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.919 [2024-07-24 19:21:32.398885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.919 [2024-07-24 19:21:32.399355] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.919 [2024-07-24 19:21:32.399408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.919 [2024-07-24 19:21:32.399463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.919 [2024-07-24 19:21:32.407602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.919 [2024-07-24 19:21:32.416689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.919 [2024-07-24 19:21:32.417532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.919 [2024-07-24 19:21:32.417604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.919 [2024-07-24 19:21:32.417644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.919 [2024-07-24 19:21:32.418179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.919 [2024-07-24 19:21:32.418633] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.919 [2024-07-24 19:21:32.418664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.919 [2024-07-24 19:21:32.418683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.919 [2024-07-24 19:21:32.424851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.919 [2024-07-24 19:21:32.434397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.919 [2024-07-24 19:21:32.435094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.919 [2024-07-24 19:21:32.435165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.919 [2024-07-24 19:21:32.435204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.919 [2024-07-24 19:21:32.435639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.919 [2024-07-24 19:21:32.436175] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.919 [2024-07-24 19:21:32.436229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.919 [2024-07-24 19:21:32.436263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.919 [2024-07-24 19:21:32.443294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.919 [2024-07-24 19:21:32.451963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.919 [2024-07-24 19:21:32.452735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.919 [2024-07-24 19:21:32.452805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.919 [2024-07-24 19:21:32.452857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.919 [2024-07-24 19:21:32.453396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.919 [2024-07-24 19:21:32.453829] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.919 [2024-07-24 19:21:32.453883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.919 [2024-07-24 19:21:32.453918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.920 [2024-07-24 19:21:32.461065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.920 [2024-07-24 19:21:32.470640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.920 [2024-07-24 19:21:32.471449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.920 [2024-07-24 19:21:32.471520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.920 [2024-07-24 19:21:32.471560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.920 [2024-07-24 19:21:32.472095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.920 [2024-07-24 19:21:32.472666] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.920 [2024-07-24 19:21:32.472720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.920 [2024-07-24 19:21:32.472753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.920 [2024-07-24 19:21:32.480852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.920 [2024-07-24 19:21:32.489306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.920 [2024-07-24 19:21:32.489961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.920 [2024-07-24 19:21:32.490034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.920 [2024-07-24 19:21:32.490074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.920 [2024-07-24 19:21:32.490589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.920 [2024-07-24 19:21:32.491055] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.920 [2024-07-24 19:21:32.491107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.920 [2024-07-24 19:21:32.491141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.920 [2024-07-24 19:21:32.498261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.920 [2024-07-24 19:21:32.507738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.920 [2024-07-24 19:21:32.508558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.920 [2024-07-24 19:21:32.508631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.920 [2024-07-24 19:21:32.508670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.920 [2024-07-24 19:21:32.509207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.920 [2024-07-24 19:21:32.509664] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.920 [2024-07-24 19:21:32.509701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.920 [2024-07-24 19:21:32.509741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.920 [2024-07-24 19:21:32.516856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.920 [2024-07-24 19:21:32.525928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.920 [2024-07-24 19:21:32.526804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.920 [2024-07-24 19:21:32.526875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.920 [2024-07-24 19:21:32.526914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.920 [2024-07-24 19:21:32.527474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.920 [2024-07-24 19:21:32.527870] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.920 [2024-07-24 19:21:32.527923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.920 [2024-07-24 19:21:32.527958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.920 [2024-07-24 19:21:32.535033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.920 [2024-07-24 19:21:32.544809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.920 [2024-07-24 19:21:32.545747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.920 [2024-07-24 19:21:32.545820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.920 [2024-07-24 19:21:32.545862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.920 [2024-07-24 19:21:32.546398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.920 [2024-07-24 19:21:32.546862] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.920 [2024-07-24 19:21:32.546916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.920 [2024-07-24 19:21:32.546950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.920 [2024-07-24 19:21:32.554064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.920 [2024-07-24 19:21:32.562554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.920 [2024-07-24 19:21:32.563407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.920 [2024-07-24 19:21:32.563496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.920 [2024-07-24 19:21:32.563544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.920 [2024-07-24 19:21:32.563959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.920 [2024-07-24 19:21:32.564529] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.920 [2024-07-24 19:21:32.564558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.920 [2024-07-24 19:21:32.564577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.920 [2024-07-24 19:21:32.571630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.920 [2024-07-24 19:21:32.580879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.920 [2024-07-24 19:21:32.581764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.920 [2024-07-24 19:21:32.581836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.920 [2024-07-24 19:21:32.581877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.920 [2024-07-24 19:21:32.582413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.920 [2024-07-24 19:21:32.582987] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.920 [2024-07-24 19:21:32.583040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.920 [2024-07-24 19:21:32.583073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.920 [2024-07-24 19:21:32.590515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:26.920 [2024-07-24 19:21:32.598796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:26.920 [2024-07-24 19:21:32.599501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:26.920 [2024-07-24 19:21:32.599540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:26.920 [2024-07-24 19:21:32.599562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:26.920 [2024-07-24 19:21:32.600016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:26.920 [2024-07-24 19:21:32.600565] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:26.920 [2024-07-24 19:21:32.600595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:26.920 [2024-07-24 19:21:32.600614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:26.920 [2024-07-24 19:21:32.607656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.180 [2024-07-24 19:21:32.615687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.180 [2024-07-24 19:21:32.616403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.180 [2024-07-24 19:21:32.616503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:27.180 [2024-07-24 19:21:32.616526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:27.180 [2024-07-24 19:21:32.616932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:27.180 [2024-07-24 19:21:32.617494] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.180 [2024-07-24 19:21:32.617523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.180 [2024-07-24 19:21:32.617542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.180 [2024-07-24 19:21:32.624118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.180 [2024-07-24 19:21:32.634314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.180 [2024-07-24 19:21:32.635059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.180 [2024-07-24 19:21:32.635129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:27.180 [2024-07-24 19:21:32.635169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:27.180 [2024-07-24 19:21:32.635653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:27.180 [2024-07-24 19:21:32.636162] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.180 [2024-07-24 19:21:32.636215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.180 [2024-07-24 19:21:32.636249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.180 [2024-07-24 19:21:32.643391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.180 [2024-07-24 19:21:32.652645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:27.180 [2024-07-24 19:21:32.653481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:27.180 [2024-07-24 19:21:32.653552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420
00:29:27.181 [2024-07-24 19:21:32.653592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set
00:29:27.181 [2024-07-24 19:21:32.654129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor
00:29:27.181 [2024-07-24 19:21:32.654623] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:27.181 [2024-07-24 19:21:32.654654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:27.181 [2024-07-24 19:21:32.654672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:27.181 [2024-07-24 19:21:32.661644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:27.181 [2024-07-24 19:21:32.670695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.181 [2024-07-24 19:21:32.671593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.181 [2024-07-24 19:21:32.671664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.181 [2024-07-24 19:21:32.671704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.181 [2024-07-24 19:21:32.672240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.181 [2024-07-24 19:21:32.672669] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.181 [2024-07-24 19:21:32.672731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.181 [2024-07-24 19:21:32.672765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.181 [2024-07-24 19:21:32.679278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.181 [2024-07-24 19:21:32.689320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.181 [2024-07-24 19:21:32.690106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.181 [2024-07-24 19:21:32.690177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.181 [2024-07-24 19:21:32.690217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.181 [2024-07-24 19:21:32.690666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.181 [2024-07-24 19:21:32.691190] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.181 [2024-07-24 19:21:32.691242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.181 [2024-07-24 19:21:32.691290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.181 [2024-07-24 19:21:32.698422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.181 [2024-07-24 19:21:32.707536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.181 [2024-07-24 19:21:32.708425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.181 [2024-07-24 19:21:32.708516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.181 [2024-07-24 19:21:32.708539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.181 [2024-07-24 19:21:32.708950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.181 [2024-07-24 19:21:32.709537] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.181 [2024-07-24 19:21:32.709567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.181 [2024-07-24 19:21:32.709586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.181 [2024-07-24 19:21:32.716618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.181 [2024-07-24 19:21:32.725773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.181 [2024-07-24 19:21:32.726644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.181 [2024-07-24 19:21:32.726714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.181 [2024-07-24 19:21:32.726754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.181 [2024-07-24 19:21:32.727291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.181 [2024-07-24 19:21:32.727712] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.181 [2024-07-24 19:21:32.727783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.181 [2024-07-24 19:21:32.727817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.181 [2024-07-24 19:21:32.734946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.181 [2024-07-24 19:21:32.744057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.181 [2024-07-24 19:21:32.744915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.181 [2024-07-24 19:21:32.744986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.181 [2024-07-24 19:21:32.745027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.181 [2024-07-24 19:21:32.745564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.181 [2024-07-24 19:21:32.746002] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.181 [2024-07-24 19:21:32.746055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.181 [2024-07-24 19:21:32.746088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.181 [2024-07-24 19:21:32.753222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.181 [2024-07-24 19:21:32.762159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.181 [2024-07-24 19:21:32.762914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.181 [2024-07-24 19:21:32.762984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.181 [2024-07-24 19:21:32.763024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.181 [2024-07-24 19:21:32.763562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.181 [2024-07-24 19:21:32.763996] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.181 [2024-07-24 19:21:32.764051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.181 [2024-07-24 19:21:32.764085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.181 [2024-07-24 19:21:32.771223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.181 [2024-07-24 19:21:32.780403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.181 [2024-07-24 19:21:32.781324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.181 [2024-07-24 19:21:32.781397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.181 [2024-07-24 19:21:32.781461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.181 [2024-07-24 19:21:32.781868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.181 [2024-07-24 19:21:32.782415] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.181 [2024-07-24 19:21:32.782488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.181 [2024-07-24 19:21:32.782526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.181 [2024-07-24 19:21:32.789557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.181 [2024-07-24 19:21:32.798183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.181 [2024-07-24 19:21:32.798909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.181 [2024-07-24 19:21:32.798981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.181 [2024-07-24 19:21:32.799022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.181 [2024-07-24 19:21:32.799418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.181 [2024-07-24 19:21:32.799834] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.181 [2024-07-24 19:21:32.799888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.181 [2024-07-24 19:21:32.799922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.181 [2024-07-24 19:21:32.806869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.181 [2024-07-24 19:21:32.816058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.181 [2024-07-24 19:21:32.816806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.181 [2024-07-24 19:21:32.816877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.181 [2024-07-24 19:21:32.816918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.181 [2024-07-24 19:21:32.817503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.181 [2024-07-24 19:21:32.817877] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.181 [2024-07-24 19:21:32.817937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.181 [2024-07-24 19:21:32.817971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.181 [2024-07-24 19:21:32.824904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.181 [2024-07-24 19:21:32.833632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.181 [2024-07-24 19:21:32.834333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.182 [2024-07-24 19:21:32.834403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.182 [2024-07-24 19:21:32.834473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.182 [2024-07-24 19:21:32.834833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.182 [2024-07-24 19:21:32.835381] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.182 [2024-07-24 19:21:32.835521] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.182 [2024-07-24 19:21:32.835548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.182 [2024-07-24 19:21:32.842650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.182 [2024-07-24 19:21:32.851113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.182 [2024-07-24 19:21:32.851842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.182 [2024-07-24 19:21:32.851913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.182 [2024-07-24 19:21:32.851953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.182 [2024-07-24 19:21:32.852516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.182 [2024-07-24 19:21:32.853034] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.182 [2024-07-24 19:21:32.853088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.182 [2024-07-24 19:21:32.853122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.182 [2024-07-24 19:21:32.859866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.182 [2024-07-24 19:21:32.866104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.182 [2024-07-24 19:21:32.866586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.182 [2024-07-24 19:21:32.866626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.182 [2024-07-24 19:21:32.866647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.182 [2024-07-24 19:21:32.866939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.182 [2024-07-24 19:21:32.867237] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.182 [2024-07-24 19:21:32.867265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.182 [2024-07-24 19:21:32.867290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.182 [2024-07-24 19:21:32.871735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
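Every cycle bottoms out in the same plain connect() that posix_sock_create reports with errno = 111 (ECONNREFUSED): nothing is listening on 10.0.0.2:4420 once the target process is gone. A minimal probe of the same condition, assuming only stock bash and its /dev/tcp pseudo-device (probe_listener is a made-up name, not a test-suite helper):

  probe_listener() {
      local ip=$1 port=$2
      # bash's /dev/tcp redirect performs a plain connect(2); with no
      # listener it fails exactly as the log shows: errno 111
      if (exec 3<>"/dev/tcp/$ip/$port") 2>/dev/null; then
          echo "listener up on $ip:$port"
      else
          echo "connect to $ip:$port refused (target is down)"
      fi
  }
  probe_listener 10.0.0.2 4420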
00:29:27.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1772990 Killed "${NVMF_APP[@]}" "$@" 00:29:27.182 19:21:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:27.182 19:21:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:27.182 19:21:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:27.182 19:21:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:27.182 19:21:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:27.442 19:21:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:27.442 19:21:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1773981 00:29:27.442 19:21:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1773981 00:29:27.442 19:21:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1773981 ']' 00:29:27.442 19:21:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.442 19:21:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:27.442 19:21:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.442 19:21:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:27.442 19:21:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:27.442 [2024-07-24 19:21:32.880999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.442 [2024-07-24 19:21:32.881476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.442 [2024-07-24 19:21:32.881515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.442 [2024-07-24 19:21:32.881540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.442 [2024-07-24 19:21:32.881837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.442 [2024-07-24 19:21:32.882135] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.442 [2024-07-24 19:21:32.882164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.442 [2024-07-24 19:21:32.882182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.442 [2024-07-24 19:21:32.886591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
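The trace above shows tgt_init restarting nvmf_tgt after the old app (pid 1772990) was killed, then parking in waitforlisten until pid 1773981 answers on /var/tmp/spdk.sock. A minimal sketch of that wait pattern, assuming an SPDK checkout for scripts/rpc.py; the real helper in autotest_common.sh is more elaborate, and wait_for_rpc_sock is a made-up name:

  wait_for_rpc_sock() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1  # app died during startup
          # rpc_get_methods is a cheap RPC that any live app answers
          scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
          sleep 0.1
      done
      return 1  # timed out waiting for the socket
  }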
00:29:27.442 [2024-07-24 19:21:32.895970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.442 [2024-07-24 19:21:32.896462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.442 [2024-07-24 19:21:32.896516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.442 [2024-07-24 19:21:32.896537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.442 [2024-07-24 19:21:32.896839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.442 [2024-07-24 19:21:32.897139] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.442 [2024-07-24 19:21:32.897168] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.442 [2024-07-24 19:21:32.897194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.442 [2024-07-24 19:21:32.901702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.442 [2024-07-24 19:21:32.910870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.442 [2024-07-24 19:21:32.911340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.442 [2024-07-24 19:21:32.911378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.442 [2024-07-24 19:21:32.911399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.442 [2024-07-24 19:21:32.911741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.442 [2024-07-24 19:21:32.912041] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.442 [2024-07-24 19:21:32.912069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.442 [2024-07-24 19:21:32.912088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.442 [2024-07-24 19:21:32.916608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.442 [2024-07-24 19:21:32.927935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.442 [2024-07-24 19:21:32.928685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.442 [2024-07-24 19:21:32.928739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.442 [2024-07-24 19:21:32.928761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.442 [2024-07-24 19:21:32.929302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.442 [2024-07-24 19:21:32.929721] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.442 [2024-07-24 19:21:32.929777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.442 [2024-07-24 19:21:32.929810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.442 [2024-07-24 19:21:32.930878] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:29:27.442 [2024-07-24 19:21:32.930973] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.442 [2024-07-24 19:21:32.936411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.442 [2024-07-24 19:21:32.943719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.442 [2024-07-24 19:21:32.944319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.442 [2024-07-24 19:21:32.944368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.442 [2024-07-24 19:21:32.944398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.443 [2024-07-24 19:21:32.944765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.443 [2024-07-24 19:21:32.945268] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.443 [2024-07-24 19:21:32.945314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.443 [2024-07-24 19:21:32.945334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.443 [2024-07-24 19:21:32.952043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.443 [2024-07-24 19:21:32.959990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.443 [2024-07-24 19:21:32.960465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.443 [2024-07-24 19:21:32.960503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.443 [2024-07-24 19:21:32.960525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.443 [2024-07-24 19:21:32.960805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.443 [2024-07-24 19:21:32.961330] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.443 [2024-07-24 19:21:32.961384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.443 [2024-07-24 19:21:32.961418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.443 [2024-07-24 19:21:32.966933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.443 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.443 [2024-07-24 19:21:32.975081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.443 [2024-07-24 19:21:32.975565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.443 [2024-07-24 19:21:32.975604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.443 [2024-07-24 19:21:32.975626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.443 [2024-07-24 19:21:32.975944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.443 [2024-07-24 19:21:32.976258] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.443 [2024-07-24 19:21:32.976299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.443 [2024-07-24 19:21:32.976330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.443 [2024-07-24 19:21:32.980944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
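The EAL line above ("No free 2048 kB hugepages reported on node 1") only means NUMA node 1 has no 2 MB pages reserved; the app still starts from node 0's pool. The per-node split can be read from standard sysfs:

  # per-NUMA-node 2 MB hugepage reservations (standard kernel sysfs paths)
  for n in /sys/devices/system/node/node*; do
      echo "$n: $(cat "$n/hugepages/hugepages-2048kB/nr_hugepages") x 2048 kB"
  done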
00:29:27.443 [2024-07-24 19:21:32.989991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.443 [2024-07-24 19:21:32.990511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.443 [2024-07-24 19:21:32.990548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.443 [2024-07-24 19:21:32.990570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.443 [2024-07-24 19:21:32.990873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.443 [2024-07-24 19:21:32.991171] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.443 [2024-07-24 19:21:32.991200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.443 [2024-07-24 19:21:32.991219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.443 [2024-07-24 19:21:32.995747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.443 [2024-07-24 19:21:33.004824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.443 [2024-07-24 19:21:33.005323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.443 [2024-07-24 19:21:33.005370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.443 [2024-07-24 19:21:33.005393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.443 [2024-07-24 19:21:33.005696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.443 [2024-07-24 19:21:33.005994] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.443 [2024-07-24 19:21:33.006023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.443 [2024-07-24 19:21:33.006042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.443 [2024-07-24 19:21:33.010465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.443 [2024-07-24 19:21:33.018182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:27.443 [2024-07-24 19:21:33.019538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.443 [2024-07-24 19:21:33.020037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.443 [2024-07-24 19:21:33.020076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.443 [2024-07-24 19:21:33.020099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.443 [2024-07-24 19:21:33.020392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.443 [2024-07-24 19:21:33.020699] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.443 [2024-07-24 19:21:33.020729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.443 [2024-07-24 19:21:33.020747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.443 [2024-07-24 19:21:33.025153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.443 [2024-07-24 19:21:33.034438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.443 [2024-07-24 19:21:33.035068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.443 [2024-07-24 19:21:33.035118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.443 [2024-07-24 19:21:33.035143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.443 [2024-07-24 19:21:33.035456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.443 [2024-07-24 19:21:33.035760] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.443 [2024-07-24 19:21:33.035791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.443 [2024-07-24 19:21:33.035812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.443 [2024-07-24 19:21:33.040218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.443 [2024-07-24 19:21:33.049318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.443 [2024-07-24 19:21:33.049837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.443 [2024-07-24 19:21:33.049877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.443 [2024-07-24 19:21:33.049899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.443 [2024-07-24 19:21:33.050205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.443 [2024-07-24 19:21:33.050516] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.443 [2024-07-24 19:21:33.050547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.443 [2024-07-24 19:21:33.050569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.443 [2024-07-24 19:21:33.054965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.443 [2024-07-24 19:21:33.064048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.443 [2024-07-24 19:21:33.064554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.443 [2024-07-24 19:21:33.064593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.443 [2024-07-24 19:21:33.064616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.443 [2024-07-24 19:21:33.064913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.443 [2024-07-24 19:21:33.065211] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.443 [2024-07-24 19:21:33.065240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.443 [2024-07-24 19:21:33.065258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.443 [2024-07-24 19:21:33.069663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.443 [2024-07-24 19:21:33.079000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.443 [2024-07-24 19:21:33.079501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.443 [2024-07-24 19:21:33.079540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.443 [2024-07-24 19:21:33.079562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.443 [2024-07-24 19:21:33.079855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.443 [2024-07-24 19:21:33.080153] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.443 [2024-07-24 19:21:33.080182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.443 [2024-07-24 19:21:33.080203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.443 [2024-07-24 19:21:33.084608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.443 [2024-07-24 19:21:33.093949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.443 [2024-07-24 19:21:33.094503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.443 [2024-07-24 19:21:33.094546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.443 [2024-07-24 19:21:33.094569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.444 [2024-07-24 19:21:33.094868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.444 [2024-07-24 19:21:33.095169] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.444 [2024-07-24 19:21:33.095198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.444 [2024-07-24 19:21:33.095235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.444 [2024-07-24 19:21:33.099653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.444 [2024-07-24 19:21:33.108736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.444 [2024-07-24 19:21:33.109256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.444 [2024-07-24 19:21:33.109300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.444 [2024-07-24 19:21:33.109324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.444 [2024-07-24 19:21:33.109635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.444 [2024-07-24 19:21:33.109936] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.444 [2024-07-24 19:21:33.109966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.444 [2024-07-24 19:21:33.109986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.444 [2024-07-24 19:21:33.114407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.444 [2024-07-24 19:21:33.123492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.444 [2024-07-24 19:21:33.123994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.444 [2024-07-24 19:21:33.124033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.444 [2024-07-24 19:21:33.124055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.444 [2024-07-24 19:21:33.124349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.444 [2024-07-24 19:21:33.124662] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.444 [2024-07-24 19:21:33.124692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.444 [2024-07-24 19:21:33.124712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.444 [2024-07-24 19:21:33.129103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.704 [2024-07-24 19:21:33.138388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.705 [2024-07-24 19:21:33.138921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.705 [2024-07-24 19:21:33.138960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.705 [2024-07-24 19:21:33.138982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.705 [2024-07-24 19:21:33.139274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.705 [2024-07-24 19:21:33.139584] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.705 [2024-07-24 19:21:33.139614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.705 [2024-07-24 19:21:33.139633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.705 [2024-07-24 19:21:33.144087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.705 [2024-07-24 19:21:33.153164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.705 [2024-07-24 19:21:33.153698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.705 [2024-07-24 19:21:33.153751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.705 [2024-07-24 19:21:33.153775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.705 [2024-07-24 19:21:33.154067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.705 [2024-07-24 19:21:33.154366] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.705 [2024-07-24 19:21:33.154395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.705 [2024-07-24 19:21:33.154414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.705 [2024-07-24 19:21:33.158182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:27.705 [2024-07-24 19:21:33.158228] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:27.705 [2024-07-24 19:21:33.158248] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:27.705 [2024-07-24 19:21:33.158264] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:27.705 [2024-07-24 19:21:33.158278] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
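The app_setup_trace notices spell out how to inspect the 0xFFFF tracepoint mask the app was launched with (-e 0xFFFF). Usage, as the log itself suggests; the build/bin path is an assumption about where this tree places the tool:

  # snapshot live nvmf tracepoints from shm instance 0 ("-i 0" at launch)
  build/bin/spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0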
00:29:27.705 [2024-07-24 19:21:33.158406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:27.705 [2024-07-24 19:21:33.158681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:27.705 [2024-07-24 19:21:33.158691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.705 [2024-07-24 19:21:33.158882] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.705 [2024-07-24 19:21:33.167967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.705 [2024-07-24 19:21:33.168530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.705 [2024-07-24 19:21:33.168578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.705 [2024-07-24 19:21:33.168602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.705 [2024-07-24 19:21:33.168902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.705 [2024-07-24 19:21:33.169203] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.705 [2024-07-24 19:21:33.169232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.705 [2024-07-24 19:21:33.169252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.705 [2024-07-24 19:21:33.173661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.705 [2024-07-24 19:21:33.182792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.705 [2024-07-24 19:21:33.183390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.705 [2024-07-24 19:21:33.183448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.705 [2024-07-24 19:21:33.183476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.705 [2024-07-24 19:21:33.183779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.705 [2024-07-24 19:21:33.184081] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.705 [2024-07-24 19:21:33.184110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.705 [2024-07-24 19:21:33.184141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.705 [2024-07-24 19:21:33.188551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
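Three reactors on cores 1-3 follow directly from the launch mask: -m 0xE is binary 1110, so bits 1, 2 and 3 are set and bit 0 (core 0) stays free, matching "Total cores available: 3" above. A quick check of the bitmap:

  mask=0xE
  for core in 0 1 2 3; do
      # test bit <core> of the mask, as the reactor framework does
      (( (mask >> core) & 1 )) && echo "core $core: reactor" || echo "core $core: idle"
  done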
00:29:27.705 [2024-07-24 19:21:33.197643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.705 [2024-07-24 19:21:33.198289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.705 [2024-07-24 19:21:33.198344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.705 [2024-07-24 19:21:33.198370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.705 [2024-07-24 19:21:33.198684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.705 [2024-07-24 19:21:33.198987] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.705 [2024-07-24 19:21:33.199017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.705 [2024-07-24 19:21:33.199038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.705 [2024-07-24 19:21:33.203443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.705 [2024-07-24 19:21:33.212559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.705 [2024-07-24 19:21:33.213231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.705 [2024-07-24 19:21:33.213294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.705 [2024-07-24 19:21:33.213320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.705 [2024-07-24 19:21:33.213639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.705 [2024-07-24 19:21:33.213942] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.705 [2024-07-24 19:21:33.213971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.705 [2024-07-24 19:21:33.213993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.705 [2024-07-24 19:21:33.218384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.705 [2024-07-24 19:21:33.227466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.705 [2024-07-24 19:21:33.228099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.705 [2024-07-24 19:21:33.228144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.705 [2024-07-24 19:21:33.228169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.705 [2024-07-24 19:21:33.228482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.705 [2024-07-24 19:21:33.228782] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.705 [2024-07-24 19:21:33.228811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.705 [2024-07-24 19:21:33.228832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.705 [2024-07-24 19:21:33.233217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.705 [2024-07-24 19:21:33.242315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.705 [2024-07-24 19:21:33.243026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.705 [2024-07-24 19:21:33.243100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.705 [2024-07-24 19:21:33.243126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.705 [2024-07-24 19:21:33.243443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.705 [2024-07-24 19:21:33.243746] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.705 [2024-07-24 19:21:33.243776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.705 [2024-07-24 19:21:33.243797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.705 [2024-07-24 19:21:33.248191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.705 [2024-07-24 19:21:33.257271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.705 [2024-07-24 19:21:33.257938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.705 [2024-07-24 19:21:33.257984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.705 [2024-07-24 19:21:33.258019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.705 [2024-07-24 19:21:33.258320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.705 [2024-07-24 19:21:33.258632] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.705 [2024-07-24 19:21:33.258663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.705 [2024-07-24 19:21:33.258684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.705 [2024-07-24 19:21:33.263076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.705 [2024-07-24 19:21:33.272166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.705 [2024-07-24 19:21:33.272759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.705 [2024-07-24 19:21:33.272800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.705 [2024-07-24 19:21:33.272822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.705 [2024-07-24 19:21:33.273118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.706 [2024-07-24 19:21:33.273416] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.706 [2024-07-24 19:21:33.273455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.706 [2024-07-24 19:21:33.273476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.706 [2024-07-24 19:21:33.277863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.706 [2024-07-24 19:21:33.286994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.706 [2024-07-24 19:21:33.287549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.706 [2024-07-24 19:21:33.287594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.706 [2024-07-24 19:21:33.287616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.706 [2024-07-24 19:21:33.287909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.706 [2024-07-24 19:21:33.288219] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.706 [2024-07-24 19:21:33.288248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.706 [2024-07-24 19:21:33.288267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.706 [2024-07-24 19:21:33.292691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:29:27.706 [2024-07-24 19:21:33.301765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:27.706 [2024-07-24 19:21:33.302400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.706 [2024-07-24 19:21:33.302476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.706 [2024-07-24 19:21:33.302503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.706 [2024-07-24 19:21:33.302803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.706 [2024-07-24 19:21:33.303103] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.706 [2024-07-24 19:21:33.303132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.706 [2024-07-24 19:21:33.303150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.706 [2024-07-24 19:21:33.307555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.706 [2024-07-24 19:21:33.316665] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.706 [2024-07-24 19:21:33.317219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.706 [2024-07-24 19:21:33.317261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.706 [2024-07-24 19:21:33.317283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.706 [2024-07-24 19:21:33.317611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.706 [2024-07-24 19:21:33.317924] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.706 [2024-07-24 19:21:33.317954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.706 [2024-07-24 19:21:33.317973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.706 [2024-07-24 19:21:33.322370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.706 [2024-07-24 19:21:33.331459] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:27.706 [2024-07-24 19:21:33.331962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.706 [2024-07-24 19:21:33.332008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.706 [2024-07-24 19:21:33.332032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.706 [2024-07-24 19:21:33.332326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.706 [2024-07-24 19:21:33.332636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.706 [2024-07-24 19:21:33.332666] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.706 [2024-07-24 19:21:33.332684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.706 [2024-07-24 19:21:33.335704] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.706 [2024-07-24 19:21:33.337086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
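Note: interleaved with the reconnect noise, the harness has just created the TCP transport on the freshly started target ("*** TCP Transport Init ***" above). The rpc_cmd wrapper used by the test maps onto plain scripts/rpc.py invocations; a sketch of the equivalent standalone call, with flags copied verbatim from the xtrace (default RPC socket /var/tmp/spdk.sock assumed):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192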
00:29:27.706 [2024-07-24 19:21:33.348553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.706 [2024-07-24 19:21:33.349193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.706 [2024-07-24 19:21:33.349246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.706 [2024-07-24 19:21:33.349271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.706 [2024-07-24 19:21:33.349586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.706 [2024-07-24 19:21:33.349886] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.706 [2024-07-24 19:21:33.349915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.706 [2024-07-24 19:21:33.349934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:27.706 [2024-07-24 19:21:33.354322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.706 [2024-07-24 19:21:33.363400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.706 [2024-07-24 19:21:33.364074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.706 [2024-07-24 19:21:33.364122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.706 [2024-07-24 19:21:33.364155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.706 [2024-07-24 19:21:33.364490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.706 [2024-07-24 19:21:33.364802] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.706 [2024-07-24 19:21:33.364832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.706 [2024-07-24 19:21:33.364853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.706 [2024-07-24 19:21:33.369358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.706 [2024-07-24 19:21:33.378293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.706 [2024-07-24 19:21:33.378973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.706 [2024-07-24 19:21:33.379031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.706 [2024-07-24 19:21:33.379074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.706 [2024-07-24 19:21:33.379363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.706 Malloc0 00:29:27.706 [2024-07-24 19:21:33.379680] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.706 [2024-07-24 19:21:33.379711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.706 [2024-07-24 19:21:33.379733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:27.706 [2024-07-24 19:21:33.384141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.706 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:27.706 [2024-07-24 19:21:33.393128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.706 [2024-07-24 19:21:33.393703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.706 [2024-07-24 19:21:33.393742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d3540 with addr=10.0.0.2, port=4420 00:29:27.706 [2024-07-24 19:21:33.393765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d3540 is same with the state(5) to be set 00:29:27.706 [2024-07-24 19:21:33.394058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d3540 (9): Bad file descriptor 00:29:27.706 [2024-07-24 19:21:33.394416] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.707 [2024-07-24 19:21:33.394481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.707 [2024-07-24 19:21:33.394500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
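Note: the rpc_cmd calls threaded through this stretch build the bdevperf target's data path: a 64 MiB malloc bdev with 512-byte blocks, the subsystem that exposes it, the namespace attach, and (immediately below) the TCP listener on 10.0.0.2:4420. As standalone scripts/rpc.py calls the same sequence would read (a sketch; socket path assumed default):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # args: size_mb block_size
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420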
00:29:27.707 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.707 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:27.707 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.707 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:27.707 [2024-07-24 19:21:33.399029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.707 [2024-07-24 19:21:33.399330] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.966 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.966 19:21:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1773277 00:29:27.966 [2024-07-24 19:21:33.408274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.966 [2024-07-24 19:21:33.459737] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:37.995 00:29:37.995 Latency(us) 00:29:37.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.995 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:37.995 Verification LBA range: start 0x0 length 0x4000 00:29:37.995 Nvme1n1 : 15.02 4918.53 19.21 5712.52 0.00 12004.16 1134.74 30292.20 00:29:37.995 =================================================================================================================== 00:29:37.995 Total : 4918.53 19.21 5712.52 0.00 12004.16 1134.74 30292.20 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:37.995 rmmod nvme_tcp 00:29:37.995 rmmod nvme_fabrics 00:29:37.995 rmmod nvme_keyring 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@489 -- # '[' -n 1773981 ']' 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1773981 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1773981 ']' 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1773981 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1773981 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1773981' 00:29:37.995 killing process with pid 1773981 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1773981 00:29:37.995 19:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1773981 00:29:37.995 19:21:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:37.995 19:21:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:37.995 19:21:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:37.995 19:21:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:37.995 19:21:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:37.995 19:21:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.995 19:21:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.995 19:21:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:39.902 00:29:39.902 real 0m24.068s 00:29:39.902 user 1m2.797s 00:29:39.902 sys 0m5.223s 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:39.902 ************************************ 00:29:39.902 END TEST nvmf_bdevperf 00:29:39.902 ************************************ 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.902 ************************************ 00:29:39.902 START TEST nvmf_target_disconnect 00:29:39.902 ************************************ 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:39.902 * Looking for test storage... 00:29:39.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:39.902 19:21:45 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:39.902 19:21:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:43.190 19:21:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:43.190 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:43.190 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:43.190 19:21:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:43.190 Found net devices under 0000:84:00.0: cvl_0_0 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:43.190 Found net devices under 0000:84:00.1: cvl_0_1 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:43.190 19:21:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.190 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:43.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:29:43.191 00:29:43.191 --- 10.0.0.2 ping statistics --- 00:29:43.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.191 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:43.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:29:43.191 00:29:43.191 --- 10.0.0.1 ping statistics --- 00:29:43.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.191 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:43.191 ************************************ 00:29:43.191 START TEST nvmf_target_disconnect_tc1 00:29:43.191 ************************************ 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:43.191 19:21:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:43.191 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.191 [2024-07-24 19:21:48.687065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.191 [2024-07-24 19:21:48.687224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe2b790 with addr=10.0.0.2, port=4420 00:29:43.191 [2024-07-24 19:21:48.687309] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:43.191 [2024-07-24 19:21:48.687379] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:43.191 [2024-07-24 19:21:48.687414] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:43.191 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:43.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:43.191 Initializing NVMe Controllers 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:43.191 00:29:43.191 real 0m0.187s 00:29:43.191 user 0m0.072s 00:29:43.191 sys 0m0.113s 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:43.191 ************************************ 00:29:43.191 END TEST nvmf_target_disconnect_tc1 00:29:43.191 ************************************ 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:43.191 19:21:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:43.191 ************************************ 00:29:43.191 START TEST nvmf_target_disconnect_tc2 00:29:43.191 ************************************ 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1777238 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1777238 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1777238 ']' 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:43.191 19:21:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.191 [2024-07-24 19:21:48.839910] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:29:43.191 [2024-07-24 19:21:48.840004] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.191 EAL: No free 2048 kB hugepages reported on node 1 00:29:43.451 [2024-07-24 19:21:48.952997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:43.710 [2024-07-24 19:21:49.178997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
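Note: disconnect_init starts this nvmf_tgt with -m 0xF0, a core mask of binary 1111 0000, i.e. cores 4-7, matching the four "Reactor started on core 4/5/6/7" notices printed just below. A quick bash sketch for decoding such a mask (not part of this run):

  mask=0xF0; printf 'cores:'; for i in $(seq 0 31); do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done; echo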
00:29:43.710 [2024-07-24 19:21:49.179108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:43.710 [2024-07-24 19:21:49.179146] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.710 [2024-07-24 19:21:49.179177] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.710 [2024-07-24 19:21:49.179202] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:43.710 [2024-07-24 19:21:49.179374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:43.710 [2024-07-24 19:21:49.179472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:43.710 [2024-07-24 19:21:49.179532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:43.710 [2024-07-24 19:21:49.179536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:43.710 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:43.710 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:43.710 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:43.710 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:43.710 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.710 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.710 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:43.710 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.710 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.710 Malloc0 00:29:43.710 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.710 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:43.710 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.710 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.710 [2024-07-24 19:21:49.402754] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.970 [2024-07-24 19:21:49.431250] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1777275 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:43.970 19:21:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:43.970 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.878 19:21:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1777238 00:29:45.878 19:21:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting 
I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 [2024-07-24 19:21:51.458908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 
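Note: this completion storm is the intended outcome of the tc2 case. The target (pid 1777238) was SIGKILLed a moment ago, so every command queued by the reconnect app completes with generic status sct=0, sc=0x8 (Command Aborted due to SQ Deletion in NVMe terms), and each qpair then reports CQ transport error -6, which is -ENXIO ("No such device or address"). A one-line sanity check of that errno mapping (a sketch, not from this run):

  python3 -c 'import errno, os; print(errno.errorcode[6], "-", os.strerror(6))'   # ENXIO - No such device or address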
00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Write completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 [2024-07-24 19:21:51.459608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.878 Read completed with error (sct=0, sc=8) 00:29:45.878 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write 
completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 [2024-07-24 19:21:51.460166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed 
with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Read completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 Write completed with error (sct=0, sc=8) 00:29:45.879 starting I/O failed 00:29:45.879 [2024-07-24 19:21:51.460642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:45.879 [2024-07-24 19:21:51.460862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.879 [2024-07-24 19:21:51.460920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.879 qpair failed and we were unable to recover it. 00:29:45.879 [2024-07-24 19:21:51.461203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.879 [2024-07-24 19:21:51.461272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.879 qpair failed and we were unable to recover it. 00:29:45.879 [2024-07-24 19:21:51.461487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.879 [2024-07-24 19:21:51.461524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.879 qpair failed and we were unable to recover it. 00:29:45.879 [2024-07-24 19:21:51.461696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.879 [2024-07-24 19:21:51.461746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.879 qpair failed and we were unable to recover it. 00:29:45.879 [2024-07-24 19:21:51.462000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.879 [2024-07-24 19:21:51.462066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.879 qpair failed and we were unable to recover it. 00:29:45.879 [2024-07-24 19:21:51.462271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.879 [2024-07-24 19:21:51.462336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.879 qpair failed and we were unable to recover it. 00:29:45.879 [2024-07-24 19:21:51.462608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.879 [2024-07-24 19:21:51.462644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.879 qpair failed and we were unable to recover it. 00:29:45.879 [2024-07-24 19:21:51.462900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.879 [2024-07-24 19:21:51.462965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.879 qpair failed and we were unable to recover it. 
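A reproduction sketch for the flow traced above, assuming an SPDK build tree and a running nvmf target: rpc_cmd in the autotest scripts forwards to scripts/rpc.py, so the target-side setup and the host-side workload correspond roughly to the following. The Malloc0 bdev, the nqn.2016-06.io.spdk:cnode1 NQN, and 10.0.0.2:4420 are taken from this log; adjust them for your environment.

  # Target side: attach the namespace and expose the TCP listeners (mirrors
  # the rpc_cmd calls at target_disconnect.sh@24-26 above)
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Host side: the reconnect example exactly as launched at
  # target_disconnect.sh@40: queue depth 32, 4096-byte I/O, 50/50 random
  # read/write, 10 s run, core mask 0xF
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The reconnect example keeps retrying its controller after the target goes away, which is what produces the repeated connection errors below once kill -9 removes the target process.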
[... the connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error / qpair failed and we were unable to recover it. sequence repeats for every further reconnect attempt in this window, timestamps advancing from 19:21:51.461487 through 19:21:51.510573 ...]
00:29:45.884 [2024-07-24 19:21:51.510905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:45.884 [2024-07-24 19:21:51.510969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:45.884 qpair failed and we were unable to recover it.
00:29:45.884 [2024-07-24 19:21:51.511172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.511208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.511403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.511484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.511747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.511811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.512072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.512108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.512370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.512452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.512682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.512748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.513023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.513059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.513297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.513362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.513644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.513680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.513823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.513865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 
00:29:45.884 [2024-07-24 19:21:51.514064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.514130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.514395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.514480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.514726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.514762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.514972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.515038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.515275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.515340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.515574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.515611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.515823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.515888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.516123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.516187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.516426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.516473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.516735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.516799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 
00:29:45.884 [2024-07-24 19:21:51.517029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.884 [2024-07-24 19:21:51.517094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.884 qpair failed and we were unable to recover it. 00:29:45.884 [2024-07-24 19:21:51.517324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.517360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.517559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.517629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.517872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.517952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.518192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.518228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.518460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.518528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.518770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.518835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.519087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.519123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.519340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.519405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.519669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.519735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 
00:29:45.885 [2024-07-24 19:21:51.519995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.520031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.520249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.520315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.520553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.520620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.520838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.520875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.521035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.521099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.521360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.521426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.521678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.521714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.521962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.522027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.522312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.522377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.522653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.522689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 
00:29:45.885 [2024-07-24 19:21:51.522929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.522994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.523260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.523325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.523573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.523610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.523820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.523886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.524094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.524159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.524385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.524421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.524637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.524703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.524965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.525031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.525280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.525316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.525510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.525591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 
00:29:45.885 [2024-07-24 19:21:51.525864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.525928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.526231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.526267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.526540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.526607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.526845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.526911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.527139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.527175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.527390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.527472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.527712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.527778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.528042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.885 [2024-07-24 19:21:51.528078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.885 qpair failed and we were unable to recover it. 00:29:45.885 [2024-07-24 19:21:51.528316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.528380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.528627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.528663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 
00:29:45.886 [2024-07-24 19:21:51.528838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.528874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.529060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.529125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.529386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.529471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.529750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.529786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.529996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.530061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.530321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.530385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.530684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.530721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.530976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.531042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.531313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.531377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.531657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.531694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 
00:29:45.886 [2024-07-24 19:21:51.531888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.531953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.532215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.532279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.532525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.532563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.532729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.532765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.532962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.532998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.533166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.533203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.533351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.533387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.533567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.533604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.533792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.533857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.534117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.534182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 
00:29:45.886 [2024-07-24 19:21:51.534496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.534533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.534714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.534780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.535063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.535128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.535373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.535453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.535642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.535712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.535952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.536016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.536305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.536370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.536580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.536617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.536825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.536889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.537153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.537189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 
00:29:45.886 [2024-07-24 19:21:51.537444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.537514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.537681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.537760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.538074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.538110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.538317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.538382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.538653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.886 [2024-07-24 19:21:51.538716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.886 qpair failed and we were unable to recover it. 00:29:45.886 [2024-07-24 19:21:51.538940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.538976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.539180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.539244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.539483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.539520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.539668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.539704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.539889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.539954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 
00:29:45.887 [2024-07-24 19:21:51.540225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.540289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.540600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.540637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.540903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.540969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.541250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.541316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.541567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.541603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.541782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.541847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.542127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.542190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.542461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.542497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.542724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.542789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.543030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.543094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 
00:29:45.887 [2024-07-24 19:21:51.543359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.543424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.543635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.543701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.543954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.544018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.544286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.544351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.544609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.544645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.544875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.544940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.545157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.545232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.545470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.545526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.545749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.545813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.546045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.546081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 
00:29:45.887 [2024-07-24 19:21:51.546291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.546355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.546586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.546623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.546803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.546839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.547045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.547109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.547343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.547408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.547691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.547728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.547990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.548055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.548326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.548391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.548677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.887 [2024-07-24 19:21:51.548713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.887 qpair failed and we were unable to recover it. 00:29:45.887 [2024-07-24 19:21:51.548952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.888 [2024-07-24 19:21:51.549017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.888 qpair failed and we were unable to recover it. 
00:29:45.888 [2024-07-24 19:21:51.549269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.888 [2024-07-24 19:21:51.549333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.888 qpair failed and we were unable to recover it. 00:29:45.888 [2024-07-24 19:21:51.549615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.888 [2024-07-24 19:21:51.549651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.888 qpair failed and we were unable to recover it. 00:29:45.888 [2024-07-24 19:21:51.549921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.888 [2024-07-24 19:21:51.549986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.888 qpair failed and we were unable to recover it. 00:29:45.888 [2024-07-24 19:21:51.550235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.888 [2024-07-24 19:21:51.550300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.888 qpair failed and we were unable to recover it. 00:29:45.888 [2024-07-24 19:21:51.550569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.888 [2024-07-24 19:21:51.550606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.888 qpair failed and we were unable to recover it. 00:29:45.888 [2024-07-24 19:21:51.550861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.888 [2024-07-24 19:21:51.550926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.888 qpair failed and we were unable to recover it. 00:29:45.888 [2024-07-24 19:21:51.551203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.888 [2024-07-24 19:21:51.551268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.888 qpair failed and we were unable to recover it. 00:29:45.888 [2024-07-24 19:21:51.551514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.888 [2024-07-24 19:21:51.551550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.888 qpair failed and we were unable to recover it. 00:29:45.888 [2024-07-24 19:21:51.551763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.888 [2024-07-24 19:21:51.551828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.888 qpair failed and we were unable to recover it. 00:29:45.888 [2024-07-24 19:21:51.552055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:45.888 [2024-07-24 19:21:51.552120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:45.888 qpair failed and we were unable to recover it. 
00:29:46.166 [2024-07-24 19:21:51.607467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.166 [2024-07-24 19:21:51.607533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.166 qpair failed and we were unable to recover it. 00:29:46.166 [2024-07-24 19:21:51.607802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.166 [2024-07-24 19:21:51.607866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.166 qpair failed and we were unable to recover it. 00:29:46.166 [2024-07-24 19:21:51.608108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.166 [2024-07-24 19:21:51.608143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.166 qpair failed and we were unable to recover it. 00:29:46.166 [2024-07-24 19:21:51.608356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.166 [2024-07-24 19:21:51.608420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.166 qpair failed and we were unable to recover it. 00:29:46.166 [2024-07-24 19:21:51.608724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.166 [2024-07-24 19:21:51.608788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.166 qpair failed and we were unable to recover it. 00:29:46.166 [2024-07-24 19:21:51.609029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.166 [2024-07-24 19:21:51.609065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.166 qpair failed and we were unable to recover it. 00:29:46.166 [2024-07-24 19:21:51.609287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.166 [2024-07-24 19:21:51.609350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.166 qpair failed and we were unable to recover it. 00:29:46.166 [2024-07-24 19:21:51.609630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.166 [2024-07-24 19:21:51.609666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.166 qpair failed and we were unable to recover it. 00:29:46.166 [2024-07-24 19:21:51.609847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.166 [2024-07-24 19:21:51.609882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.166 qpair failed and we were unable to recover it. 00:29:46.166 [2024-07-24 19:21:51.610068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.166 [2024-07-24 19:21:51.610131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.166 qpair failed and we were unable to recover it. 
00:29:46.166 [2024-07-24 19:21:51.610375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.166 [2024-07-24 19:21:51.610457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.166 qpair failed and we were unable to recover it. 00:29:46.166 [2024-07-24 19:21:51.610708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.166 [2024-07-24 19:21:51.610744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.166 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.610953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.611016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.611308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.611372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.611651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.611688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.611905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.611970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.612207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.612271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.612516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.612553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.612747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.612812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.613081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.613146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 
00:29:46.167 [2024-07-24 19:21:51.613358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.613394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.613600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.613666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.613936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.614002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.614211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.614246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.614463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.614529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.614779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.614843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.615110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.615151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.615362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.615440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.615660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.615725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.615925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.615961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 
00:29:46.167 [2024-07-24 19:21:51.616158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.616225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.616480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.616545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.616795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.616831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.617012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.617076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.617306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.617371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.617639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.617676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.617870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.617908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.618117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.618182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.618477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.618513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.618684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.618763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 
00:29:46.167 [2024-07-24 19:21:51.619023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.619090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.167 [2024-07-24 19:21:51.619342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.167 [2024-07-24 19:21:51.619408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.167 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.619633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.619669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.619954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.620019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.620258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.620294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.620490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.620557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.620823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.620888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.621150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.621186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.621402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.621482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.621755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.621821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 
00:29:46.168 [2024-07-24 19:21:51.622097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.622133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.622339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.622404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.622661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.622727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.623000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.623036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.623272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.623337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.623595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.623631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.623812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.623848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.624070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.624136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.624407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.624510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.624753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.624789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 
00:29:46.168 [2024-07-24 19:21:51.624978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.625043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.625302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.625367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.625625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.625661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.625850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.625916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.626180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.626245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.626484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.626521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.626689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.626764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.626995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.627060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.627318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.627354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.627530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.627595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 
00:29:46.168 [2024-07-24 19:21:51.627804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.627870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.628114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.628150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.628366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.628481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.628776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.628842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.629118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.629154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.629368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.629450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.629696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.629762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.630029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.630065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.630296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.630360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 00:29:46.168 [2024-07-24 19:21:51.630629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.168 [2024-07-24 19:21:51.630666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.168 qpair failed and we were unable to recover it. 
00:29:46.169 [2024-07-24 19:21:51.630927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.630963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.631187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.631253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.631490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.631557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.631787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.631822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.632004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.632069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.632298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.632363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.632619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.632656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.632865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.632929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.633181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.633246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.633495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.633531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 
00:29:46.169 [2024-07-24 19:21:51.633695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.633761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.634030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.634096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.634345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.634410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.634665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.634717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.635044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.635082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.635352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.635417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.635621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.635657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.635850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.635884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.636093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.636157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.636381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.636475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 
00:29:46.169 [2024-07-24 19:21:51.636687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.636724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.636926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.636992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.637239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.637303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.637540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.637576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.637728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.637805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.638027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.638092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.638302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.638338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.638498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.638535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.638663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.638736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.638958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.639029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 
00:29:46.169 [2024-07-24 19:21:51.639261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.639323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.639554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.639591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.639774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.639836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.640129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.640194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.640521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.640557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.640751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.640796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.641032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.641078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.169 [2024-07-24 19:21:51.641299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.169 [2024-07-24 19:21:51.641334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.169 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.641529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.641565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.641732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.641777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 
00:29:46.170 [2024-07-24 19:21:51.642008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.642042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.642221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.642266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.642506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.642542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.642717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.642752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.642938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.642982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.643181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.643226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.643400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.643443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.643589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.643623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.643825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.643870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.644095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.644130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 
00:29:46.170 [2024-07-24 19:21:51.644338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.644383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.644632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.644668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.644819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.644854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.645012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.645085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.645325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.645389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.645700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.645735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.646029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.646074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.646314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.646359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.646611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.646646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.646845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.646878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 
00:29:46.170 [2024-07-24 19:21:51.647083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.647116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.647288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.647322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.647476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.647511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.647687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.647719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.647901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.647934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.648115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.648178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.648451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.648517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.648750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.648785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.648980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.649043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 00:29:46.170 [2024-07-24 19:21:51.649308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.170 [2024-07-24 19:21:51.649372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.170 qpair failed and we were unable to recover it. 
00:29:46.176 [2024-07-24 19:21:51.705524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.705589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.705854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.705918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.706180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.706215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.706404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.706486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.706750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.706813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.707044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.707079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.707298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.707362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.707609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.707644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.707854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.707889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.708052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.708116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 
00:29:46.176 [2024-07-24 19:21:51.708351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.708413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.708697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.708732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.708951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.709015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.709248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.709312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.709616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.709654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.709883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.709928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.710158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.710205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.710459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.710497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.710715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.710763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.710998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.711063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 
00:29:46.176 [2024-07-24 19:21:51.711335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.711371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.711610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.711676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.711952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.712001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.712233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.712269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.712448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.712497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.712721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.712767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.713007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.713044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.713234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.713269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.713411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.713465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.713750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.713787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 
00:29:46.176 [2024-07-24 19:21:51.713979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.714016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.714160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.714197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.714406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.176 [2024-07-24 19:21:51.714488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.176 qpair failed and we were unable to recover it. 00:29:46.176 [2024-07-24 19:21:51.714708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.714769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.715010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.715074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.715355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.715426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.715670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.715749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.716025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.716075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.716309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.716355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.716541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.716588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 
00:29:46.177 [2024-07-24 19:21:51.716768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.716814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.717027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.717063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.717284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.717319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.717500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.717538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.717688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.717724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.717922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.717967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.718164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.718210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.718452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.718489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.718644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.718690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.718857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.718904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 
00:29:46.177 [2024-07-24 19:21:51.719109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.719146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.719357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.719453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.719694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.719730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.720008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.720044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.720266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.720312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.720478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.720526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.720706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.720741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.720931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.720968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.721166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.721202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.721380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.721416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 
00:29:46.177 [2024-07-24 19:21:51.721625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.721709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.722003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.722050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.722264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.722304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.722514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.722562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.722788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.722841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.723067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.723102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.723294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.723376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.723618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.723685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.723935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.177 [2024-07-24 19:21:51.723976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.177 qpair failed and we were unable to recover it. 00:29:46.177 [2024-07-24 19:21:51.724181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.724218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 
00:29:46.178 [2024-07-24 19:21:51.724357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.724426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.724683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.724718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.724887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.724935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.725151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.725231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.725490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.725527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.725733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.725799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.726070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.726117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.726358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.726404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.726630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.726686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.726924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.726973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 
00:29:46.178 [2024-07-24 19:21:51.727198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.727235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.727449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.727513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.727753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.727820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.728075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.728113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.728302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.728337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.728488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.728534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.728707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.728744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.728943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.728993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.729208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.729284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.729520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.729559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 
00:29:46.178 [2024-07-24 19:21:51.729775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.729840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.730107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.730180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.730436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.730474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.730652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.730689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.730933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.730978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.731185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.731221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.731425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.731475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.731659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.731726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.731996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.732037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.732196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.732232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 
00:29:46.178 [2024-07-24 19:21:51.732374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.732409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.732626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.732662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.732836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.732891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.733138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.733203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.733467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.733506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.733743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.733808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.734042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.178 [2024-07-24 19:21:51.734117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.178 qpair failed and we were unable to recover it. 00:29:46.178 [2024-07-24 19:21:51.734346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.734393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.734615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.734656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.734900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.734945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 
00:29:46.179 [2024-07-24 19:21:51.735147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.735191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.735373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.735408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.735601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.735638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.735845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.735886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.736146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.736193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.736359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.736405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.736623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.736660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.736866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.736912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.737121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.737169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.737369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.737405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 
00:29:46.179 [2024-07-24 19:21:51.737608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.737655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.737898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.737963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.738179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.738220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.738441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.738478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.738626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.738664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.738881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.738916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.739142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.739190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.739416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.739509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.739708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.739744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.739924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.739989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 
00:29:46.179 [2024-07-24 19:21:51.740258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.740310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.740530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.740567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.740754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.740809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.740993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.741038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.741242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.741282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.741486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.741523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.741661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.741698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.741874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.741911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.742154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.742205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.742443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.742499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 
00:29:46.179 [2024-07-24 19:21:51.742661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.742698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.742906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.742951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.743150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.743226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.743441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.743478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.743648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.179 [2024-07-24 19:21:51.743690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.179 qpair failed and we were unable to recover it. 00:29:46.179 [2024-07-24 19:21:51.743980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.744045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.744324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.744361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.744496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.744533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.744746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.744802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.745010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.745046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 
00:29:46.180 [2024-07-24 19:21:51.745255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.745305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.745575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.745642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.745928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.745965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.746199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.746274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.746519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.746572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.746819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.746854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.747053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.747099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.747337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.747388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.747630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.747667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.747876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.747946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 
00:29:46.180 [2024-07-24 19:21:51.748246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.748282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.748493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.748539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.748693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.748732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.748919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.748966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.749203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.749239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.749454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.749490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.749646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.749681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.749897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.749934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.750201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.750266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.750524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.750572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 
00:29:46.180 [2024-07-24 19:21:51.750742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.750778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.750967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.751014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.751189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.751235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.751454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.751492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.751636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.751671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.751849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.751886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.752087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.752122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.752389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.752452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.752668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.752714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.752960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.752996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 
00:29:46.180 [2024-07-24 19:21:51.753192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.753240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.753476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.753524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.753701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.180 [2024-07-24 19:21:51.753737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.180 qpair failed and we were unable to recover it. 00:29:46.180 [2024-07-24 19:21:51.753939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.753976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.754142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.754220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.754486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.754524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.754663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.754699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.754865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.754912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.755137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.755172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.755365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.755401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 
00:29:46.181 [2024-07-24 19:21:51.755578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.755614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.755792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.755828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.756071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.756136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.756443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.756499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.756682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.756718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.756927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.756975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.757130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.757175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.757380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.757416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.757575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.757611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.757825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.757861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 
00:29:46.181 [2024-07-24 19:21:51.758100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.758136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.758301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.758337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.758481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.758527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.758705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.758742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.758916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.758961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.759175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.759222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.759466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.759502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.759660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.759734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.760002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.760066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.760302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.760338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 
00:29:46.181 [2024-07-24 19:21:51.760525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.760561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.760774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.760810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.761045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.761088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.761274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.761311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.181 [2024-07-24 19:21:51.761490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.181 [2024-07-24 19:21:51.761533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.181 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.761762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.761798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.762095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.762150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.762332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.762378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.762575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.762619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.762807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.762853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 
00:29:46.182 [2024-07-24 19:21:51.763096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.763146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.763380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.763496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.763715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.763788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.764031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.764096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.764355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.764455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.764671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.764707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.764852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.764895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.765076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.765112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.765305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.765355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.765557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.765593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 
00:29:46.182 [2024-07-24 19:21:51.765757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.765793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.765976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.766040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.766340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.766426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.766656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.766698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.766851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.766894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.767111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.767157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.767403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.767465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.767609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.767645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.767798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.767837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.768004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.768039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 
00:29:46.182 [2024-07-24 19:21:51.768252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.768336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.768537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.768586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.768785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.768825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.769041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.769087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.769291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.769340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.769546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.769583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.769756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.769792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.770015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.770080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.770348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.770387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.770594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.770631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 
00:29:46.182 [2024-07-24 19:21:51.770797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.770851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.771058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.771093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.182 [2024-07-24 19:21:51.771254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.182 [2024-07-24 19:21:51.771305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.182 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.771637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.771674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.771852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.771888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.772091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.772156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.772453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.772509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.772692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.772728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.772906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.772966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.773135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.773181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 
00:29:46.183 [2024-07-24 19:21:51.773424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.773474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.773628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.773663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.773832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.773868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.774059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.774095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.774330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.774378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.774582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.774618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.774806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.774842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.775033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.775089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.775337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.775384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.775664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.775701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 
00:29:46.183 [2024-07-24 19:21:51.775916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.775982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.776246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.776320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.776553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.776590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.776769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.776821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.776992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.777037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.777259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.777296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.777499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.777536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.777688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.777730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.777934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.777970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.778218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.778269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 
00:29:46.183 [2024-07-24 19:21:51.778524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.778571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.778808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.778845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.779056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.779103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.779354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.779402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.779653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.779688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.779870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.779938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.780195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.780260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.780524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.780561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.780751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.780786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 00:29:46.183 [2024-07-24 19:21:51.781004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.183 [2024-07-24 19:21:51.781063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.183 qpair failed and we were unable to recover it. 
00:29:46.184 [2024-07-24 19:21:51.781307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.781354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.781570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.781607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.781809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.781846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.782011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.782047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.782239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.782318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.782546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.782594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.782828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.782865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.783055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.783100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.783299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.783350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.783594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.783630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 
00:29:46.184 [2024-07-24 19:21:51.783800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.783848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.784100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.784166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.784395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.784442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.784630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.784666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.784830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.784866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.785079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.785115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.785321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.785367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.785592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.785630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.785802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.785839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.786019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.786083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 
00:29:46.184 [2024-07-24 19:21:51.786336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.786384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.786580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.786617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.786844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.786881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.787101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.787147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.787393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.787456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.787621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.787657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.787854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.787890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.788103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.788175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.788454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.788503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-24 19:21:51.788708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.184 [2024-07-24 19:21:51.788754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.184 qpair failed and we were unable to recover it. 
00:29:46.189 [2024-07-24 19:21:51.839843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.189 [2024-07-24 19:21:51.839890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.189 qpair failed and we were unable to recover it. 00:29:46.189 [2024-07-24 19:21:51.840128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.189 [2024-07-24 19:21:51.840167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.189 qpair failed and we were unable to recover it. 00:29:46.189 [2024-07-24 19:21:51.840371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.840404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.840575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.840632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.840877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.840942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.841194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.841276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.841515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.841550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.841745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.841812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.842032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.842078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.842281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.842315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 
00:29:46.190 [2024-07-24 19:21:51.842509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.842544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.842694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.842757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.842942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.843018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.843282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.843362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.843538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.843573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.843797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.843869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.844082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.844129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.844349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.844383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.844556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.844591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.190 qpair failed and we were unable to recover it. 00:29:46.190 [2024-07-24 19:21:51.844812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.190 [2024-07-24 19:21:51.844866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.464 qpair failed and we were unable to recover it. 
00:29:46.464 [2024-07-24 19:21:51.845101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.464 [2024-07-24 19:21:51.845165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.464 qpair failed and we were unable to recover it. 00:29:46.464 [2024-07-24 19:21:51.845384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.464 [2024-07-24 19:21:51.845495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.464 qpair failed and we were unable to recover it. 00:29:46.464 [2024-07-24 19:21:51.845642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.464 [2024-07-24 19:21:51.845678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.464 qpair failed and we were unable to recover it. 00:29:46.464 [2024-07-24 19:21:51.845885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.464 [2024-07-24 19:21:51.845961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.464 qpair failed and we were unable to recover it. 00:29:46.464 [2024-07-24 19:21:51.846211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.464 [2024-07-24 19:21:51.846245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.464 qpair failed and we were unable to recover it. 00:29:46.464 [2024-07-24 19:21:51.846383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.464 [2024-07-24 19:21:51.846424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.464 qpair failed and we were unable to recover it. 00:29:46.464 [2024-07-24 19:21:51.846637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.464 [2024-07-24 19:21:51.846671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.464 qpair failed and we were unable to recover it. 00:29:46.464 [2024-07-24 19:21:51.846881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.464 [2024-07-24 19:21:51.846938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.464 qpair failed and we were unable to recover it. 00:29:46.464 [2024-07-24 19:21:51.847160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.464 [2024-07-24 19:21:51.847194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.464 qpair failed and we were unable to recover it. 00:29:46.464 [2024-07-24 19:21:51.847375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.464 [2024-07-24 19:21:51.847412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.464 qpair failed and we were unable to recover it. 
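[Annotation: not part of the captured log.] errno 111 on Linux is ECONNREFUSED: each connect() to 10.0.0.2:4420 (the NVMe/TCP well-known port) is answered with a TCP RST because nothing is accepting on that address yet, so SPDK's posix_sock_create() fails, nvme_tcp_qpair_connect_sock() reports the qpair as unrecoverable, and the test retries. A minimal sketch of the same failure, assuming a Linux host with no listener on the probed port; the loopback address below is illustrative, not taken from the log:

    /* sketch: reproduce errno 111 (ECONNREFUSED) from connect() */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                     /* NVMe/TCP port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* no listener -> RST */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }

On such a host this prints "connect() failed, errno = 111 (Connection refused)", matching the posix.c:1023 message above.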
00:29:46.464 [... 5 further repeats of the same sequence for tqpair=0x7f5e08000b90 between 19:21:51.847573 and 19:21:51.848554 ...]
00:29:46.464 [2024-07-24 19:21:51.848720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2086b00 is same with the state(5) to be set
00:29:46.464 [2024-07-24 19:21:51.849012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.464 [2024-07-24 19:21:51.849076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420
00:29:46.464 qpair failed and we were unable to recover it.
00:29:46.465 [... 2 further repeats for tqpair=0x7f5e10000b90 between 19:21:51.849350 and 19:21:51.849702, then 1 repeat for tqpair=0x7f5e08000b90 at 19:21:51.849948 ...]
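[Annotation: not part of the captured log.] The one distinct message in this stretch, from nvme_tcp.c:327, records that nvme_tcp_qpair_set_recv_state() was asked to move tqpair=0x2086b00 into the receive state it is already in; "state(5)" is the raw enum value, and which member of SPDK's PDU-recv-state enum it names depends on the SPDK revision, so it is left uninterpreted here. The guard itself is a common state-machine idiom; a hedged sketch with hypothetical names (not SPDK's actual types or API):

    /* sketch: idempotent state-set guard; all names are hypothetical */
    #include <stdio.h>

    enum recv_state { RS_READY, RS_HDR, RS_PAYLOAD, RS_COMPLETE, RS_QUIESCING, RS_ERROR };
    struct qpair { enum recv_state recv_state; };

    static void set_recv_state(struct qpair *q, enum recv_state s)
    {
        if (q->recv_state == s) {
            fprintf(stderr, "recv state of qpair=%p is same with the state(%d) to be set\n",
                    (void *)q, (int)s);
            return; /* no-op transition, just log it */
        }
        q->recv_state = s;
    }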
00:29:46.465 [2024-07-24 19:21:51.850218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.465 [2024-07-24 19:21:51.850264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.465 qpair failed and we were unable to recover it.
00:29:46.465 [... the same sequence repeats 139 more times for tqpair=0x7f5e08000b90 (addr=10.0.0.2, port=4420) between 19:21:51.850499 and 19:21:51.889997 ...]
00:29:46.468 [2024-07-24 19:21:51.890232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.468 [2024-07-24 19:21:51.890295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.468 qpair failed and we were unable to recover it. 00:29:46.468 [2024-07-24 19:21:51.890558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.468 [2024-07-24 19:21:51.890623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.468 qpair failed and we were unable to recover it. 00:29:46.468 [2024-07-24 19:21:51.890911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.468 [2024-07-24 19:21:51.890947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.468 qpair failed and we were unable to recover it. 00:29:46.468 [2024-07-24 19:21:51.891152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.468 [2024-07-24 19:21:51.891216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.468 qpair failed and we were unable to recover it. 00:29:46.468 [2024-07-24 19:21:51.891480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.468 [2024-07-24 19:21:51.891546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.468 qpair failed and we were unable to recover it. 00:29:46.468 [2024-07-24 19:21:51.891795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.468 [2024-07-24 19:21:51.891830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.468 qpair failed and we were unable to recover it. 00:29:46.468 [2024-07-24 19:21:51.892006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.468 [2024-07-24 19:21:51.892069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.468 qpair failed and we were unable to recover it. 00:29:46.468 [2024-07-24 19:21:51.892288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.892352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.892612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.892647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.892842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.892906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 
00:29:46.469 [2024-07-24 19:21:51.893175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.893239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.893521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.893557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.893770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.893833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.894095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.894159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.894422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.894465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.894670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.894734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.894961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.895025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.895264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.895328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.895595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.895631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.895861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.895925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 
00:29:46.469 [2024-07-24 19:21:51.896200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.896234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.896455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.896513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.896730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.896794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.897028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.897062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.897275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.897349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.897631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.897667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.897849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.897883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.898114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.898178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.898474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.898540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.898804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.898839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 
00:29:46.469 [2024-07-24 19:21:51.899030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.899094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.899356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.899421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.899678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.899713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.899929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.899993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.900261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.900324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.900599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.900635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.900841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.900905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.469 [2024-07-24 19:21:51.901165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.469 [2024-07-24 19:21:51.901228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.469 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.901468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.901504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.901680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.901758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 
00:29:46.470 [2024-07-24 19:21:51.902032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.902095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.902378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.902412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.902651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.902715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.902957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.903020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.903303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.903367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.903626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.903662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.903950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.904013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.904254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.904317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.904559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.904594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.904786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.904851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 
00:29:46.470 [2024-07-24 19:21:51.905076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.905140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.905382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.905463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.905662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.905697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.905974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.906009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.906238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.906302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.906539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.906605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.906869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.906905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.907151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.907215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.907462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.907527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.907791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.907827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 
00:29:46.470 [2024-07-24 19:21:51.908029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.908092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.908354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.908417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.908670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.908704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.908959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.909022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.909292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.909366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.909659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.909694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.909919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.909983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.910246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.910310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.910585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.910621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.910862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.910927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 
00:29:46.470 [2024-07-24 19:21:51.911157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.911221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.911481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.911517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.911711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.911775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.911986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.912050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.912317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.912352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.470 qpair failed and we were unable to recover it. 00:29:46.470 [2024-07-24 19:21:51.912549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.470 [2024-07-24 19:21:51.912615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.912880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.912943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.913183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.913218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.913404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.913500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.913774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.913838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 
00:29:46.471 [2024-07-24 19:21:51.914124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.914158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.914329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.914393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.914684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.914747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.915011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.915046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.915256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.915320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.915537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.915603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.915871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.915906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.916115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.916178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.916423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.916498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.916716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.916751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 
00:29:46.471 [2024-07-24 19:21:51.916996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.917059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.917323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.917387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.917703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.917739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.917972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.918031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.918206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.918243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.918506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.918542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.918747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.918811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.919079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.919144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.919418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.919461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.919676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.919740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 
00:29:46.471 [2024-07-24 19:21:51.920000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.920064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.920324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.920358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.920545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.920611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.920882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.920945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.921218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.921259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.921503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.921568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.921847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.921911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.922184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.922218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.922460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.922525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.922762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.922826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 
00:29:46.471 [2024-07-24 19:21:51.923089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.923124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.923339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.923403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.923671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.471 [2024-07-24 19:21:51.923736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.471 qpair failed and we were unable to recover it. 00:29:46.471 [2024-07-24 19:21:51.923983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.924018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.924202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.924265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.924514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.924550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.924752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.924787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.924997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.925062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.925342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.925406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.925710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.925745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 
00:29:46.472 [2024-07-24 19:21:51.925994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.926058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.926305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.926369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.926645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.926680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.926942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.927006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.927213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.927276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.927539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.927575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.927788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.927852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.928048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.928112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.928397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.928441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.928687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.928750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 
00:29:46.472 [2024-07-24 19:21:51.928982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.929045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.929306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.929341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.929534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.929599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.929833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.929897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.930164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.930199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.930389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.930469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.930743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.930806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.931067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.931101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.931310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.931373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.931650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.931686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 
00:29:46.472 [2024-07-24 19:21:51.931885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.931920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.932100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.932163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.932422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.932506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.932714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.932749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.932966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.933039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.933280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.933343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.933654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.933690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.933930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.933994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.934265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.934328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.934615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.934651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 
00:29:46.472 [2024-07-24 19:21:51.934889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.472 [2024-07-24 19:21:51.934953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.472 qpair failed and we were unable to recover it. 00:29:46.472 [2024-07-24 19:21:51.935225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.935288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.935561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.935597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.935803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.935867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.936128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.936191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.936436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.936472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.936694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.936757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.936993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.937057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.937332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.937367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.937563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.937629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 
00:29:46.473 [2024-07-24 19:21:51.937902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.937965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.938204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.938239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.938442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.938508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.938782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.938846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.939121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.939156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.939386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.939468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.939731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.939796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.940035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.940070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.940287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.940350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.940642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.940678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 
00:29:46.473 [2024-07-24 19:21:51.940853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.940888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.941102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.941166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.941453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.941518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.941795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.941830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.942089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.942152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.942410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.942491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.942733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.942768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.942984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.943047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.943325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.943389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.943658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.943693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 
00:29:46.473 [2024-07-24 19:21:51.943930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.943994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.944222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.944287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.944519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.473 [2024-07-24 19:21:51.944555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.473 qpair failed and we were unable to recover it. 00:29:46.473 [2024-07-24 19:21:51.944728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.944792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.945021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.945095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.945360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.945394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.945611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.945676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.945942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.946007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.946279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.946314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.946521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.946586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 
00:29:46.474 [2024-07-24 19:21:51.946846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.946910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.947178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.947212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.947453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.947519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.947791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.947856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.948127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.948162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.948392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.948494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.948731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.948795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.949041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.949076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.949294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.949358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.949610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.949646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 
00:29:46.474 [2024-07-24 19:21:51.949849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.949883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.950091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.950154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.950384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.950465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.950754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.950789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.951015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.951079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.951347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.951411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.951681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.951716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.951930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.951994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.952251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.952314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.952612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.952648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 
00:29:46.474 [2024-07-24 19:21:51.952853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.952917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.953168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.953231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.953490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.953526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.953765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.953830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.954088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.954151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.954416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.954459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.954668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.954732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.954990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.955053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.955317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.955352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.955541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.955607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 
00:29:46.474 [2024-07-24 19:21:51.955840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.474 [2024-07-24 19:21:51.955903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.474 qpair failed and we were unable to recover it. 00:29:46.474 [2024-07-24 19:21:51.956135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.956170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.956410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.956488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.956763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.956827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.957060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.957100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.957315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.957378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.957659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.957734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.957999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.958034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.958217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.958281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.958526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.958592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 
00:29:46.475 [2024-07-24 19:21:51.958832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.958867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.959073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.959137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.959416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.959498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.959779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.959816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.960062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.960127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.960399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.960496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.960754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.960790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.960970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.961026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.961271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.961317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.961525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.961562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 
00:29:46.475 [2024-07-24 19:21:51.961770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.961815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.962049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.962097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.962333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.962370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.962620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.962687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.962946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.963025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.963302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.963338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.963544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.963582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.963767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.963812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.964059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.964095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.964252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.964287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 
00:29:46.475 [2024-07-24 19:21:51.964449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.964487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.964702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.964759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.964986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.965023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.965241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.965297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.965468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.965504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.965681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.965717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.965939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.966000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.966241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.966295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.966502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.475 [2024-07-24 19:21:51.966537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.475 qpair failed and we were unable to recover it. 00:29:46.475 [2024-07-24 19:21:51.966760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.966817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 
00:29:46.476 [2024-07-24 19:21:51.967005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.967058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.967234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.967287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.967492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.967532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.967782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.967855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.968138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.968221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.968493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.968530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.968783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.968855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.969052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.969098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.969328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.969381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.969645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.969705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 
00:29:46.476 [2024-07-24 19:21:51.969931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.969979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.970221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.970286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.970560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.970597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.970797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.970863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.971174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.971221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.971465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.971509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.971749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.971795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.972024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.972071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.972297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.972361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.972643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.972681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 
00:29:46.476 [2024-07-24 19:21:51.972942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.973007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.973290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.973337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.973546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.973582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.973750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.973787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.973970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.974005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.974220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.974287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.974576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.974613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.974802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.974838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.975045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.975109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.975379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.975485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 
00:29:46.476 [2024-07-24 19:21:51.975710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.975745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.975933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.975970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.976160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.976225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.976511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.976548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.976684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.976720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.976958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.977026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.977317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.476 [2024-07-24 19:21:51.977391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.476 qpair failed and we were unable to recover it. 00:29:46.476 [2024-07-24 19:21:51.977677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.977715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.977930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.978002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.978299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.978364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 
00:29:46.477 [2024-07-24 19:21:51.978648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.978685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.978866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.978909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.979089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.979154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.979418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.979506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.979693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.979735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.979908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.979946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.980166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.980235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.980473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.980531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.980719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.980755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.980927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.981002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 
00:29:46.477 [2024-07-24 19:21:51.981250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.981316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.981591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.981636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.981853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.981888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.982091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.982128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.982308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.982373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.982664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.982701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.982906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.982960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.983195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.983230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.983423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.983509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 00:29:46.477 [2024-07-24 19:21:51.983797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.477 [2024-07-24 19:21:51.983863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.477 qpair failed and we were unable to recover it. 
00:29:46.477 [2024-07-24 19:21:51.984124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.477 [2024-07-24 19:21:51.984161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.477 qpair failed and we were unable to recover it.
00:29:46.477 [2024-07-24 19:21:51.984398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.477 [2024-07-24 19:21:51.984488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.477 qpair failed and we were unable to recover it.
00:29:46.477 [2024-07-24 19:21:51.984758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.477 [2024-07-24 19:21:51.984835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.477 qpair failed and we were unable to recover it.
00:29:46.477 [2024-07-24 19:21:51.985106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.477 [2024-07-24 19:21:51.985141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.477 qpair failed and we were unable to recover it.
00:29:46.477 [2024-07-24 19:21:51.985298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.477 [2024-07-24 19:21:51.985333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.477 qpair failed and we were unable to recover it.
00:29:46.477 [2024-07-24 19:21:51.985565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.477 [2024-07-24 19:21:51.985602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.477 qpair failed and we were unable to recover it.
00:29:46.477 [2024-07-24 19:21:51.985776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.477 [2024-07-24 19:21:51.985813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.477 qpair failed and we were unable to recover it.
00:29:46.477 [2024-07-24 19:21:51.986024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.477 [2024-07-24 19:21:51.986059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.477 qpair failed and we were unable to recover it.
00:29:46.477 [2024-07-24 19:21:51.986290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.477 [2024-07-24 19:21:51.986343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.477 qpair failed and we were unable to recover it.
00:29:46.477 [2024-07-24 19:21:51.986627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.477 [2024-07-24 19:21:51.986670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.477 qpair failed and we were unable to recover it.
00:29:46.477 [2024-07-24 19:21:51.986900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.477 [2024-07-24 19:21:51.986936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.477 qpair failed and we were unable to recover it.
00:29:46.477 [2024-07-24 19:21:51.987130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.477 [2024-07-24 19:21:51.987166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.477 qpair failed and we were unable to recover it.
00:29:46.477 [2024-07-24 19:21:51.987414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.477 [2024-07-24 19:21:51.987507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.477 qpair failed and we were unable to recover it.
00:29:46.477 [2024-07-24 19:21:51.987675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.477 [2024-07-24 19:21:51.987711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.477 qpair failed and we were unable to recover it.
00:29:46.477 [2024-07-24 19:21:51.987888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.477 [2024-07-24 19:21:51.987923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.477 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.988134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.988170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.988362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.988455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.988700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.988737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.988940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.988976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.989173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.989238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.989506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.989549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.989764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.989799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.989988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.990024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.990231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.990297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.990561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.990604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.990744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.990779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.991003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.991070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.991354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.991449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.991650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.991688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.991919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.991998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.992294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.992359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.992639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.992684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.992879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.992915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.993171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.993238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.993508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.993544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.993721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.993757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.993961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.993996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.994230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.994296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.994579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.994654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.994940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.994977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.995165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.995201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.995395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.995493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.995655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.995692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.995910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.995945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.996126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.996163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.996376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.996457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.478 qpair failed and we were unable to recover it.
00:29:46.478 [2024-07-24 19:21:51.996692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.478 [2024-07-24 19:21:51.996728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:51.996905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:51.996941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:51.997191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:51.997264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:51.997483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:51.997525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:51.997711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:51.997747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:51.997985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:51.998054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:51.998356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:51.998422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:51.998713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:51.998786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:51.999077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:51.999142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:51.999415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:51.999508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:51.999685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:51.999722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:51.999928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:51.999964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.000181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.000245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.000524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.000562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.000774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.000810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.001044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.001111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.001385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.001490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.001692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.001728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.001945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.002011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.002310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.002374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.002647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.002685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.002859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.002894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.003070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.003146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.003471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.003529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.003754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.003790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.003957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.004000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.004229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.004294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.004563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.004604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.004835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.004872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.005075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.005112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.005338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.005404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.005686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.005723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.005935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.005972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.006227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.006291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.006564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.006610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.006787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.006822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.006996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.007033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.007246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.007311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.479 qpair failed and we were unable to recover it.
00:29:46.479 [2024-07-24 19:21:52.007600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.479 [2024-07-24 19:21:52.007637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.007839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.007921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.008196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.008261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.008529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.008574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.008766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.008802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.009005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.009043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.009257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.009322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.009642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.009685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.009863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.009901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.010113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.010149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.010426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.010509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.010731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.010795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.011047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.011084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.011276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.011341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.011646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.011683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.011859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.011904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.012119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.012185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.012469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.012550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.012802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.012838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.012976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.013021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.013237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.013310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.013571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.013609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.013783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.013818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.014018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.014054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.014258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.014301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.014494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.014530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.014734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.014771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.014939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.014974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.015145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.015185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.015335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.015370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.015640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.015677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.015903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.015967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.016244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.016311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.016542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.016579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.016797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.016864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.017098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.017163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.017438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.017476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.017694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.480 [2024-07-24 19:21:52.017730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.480 qpair failed and we were unable to recover it.
00:29:46.480 [2024-07-24 19:21:52.017918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.017954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.018161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.018197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.018417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.018473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.018715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.018781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.019044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.019079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.019333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.019397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.019703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.019769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.020019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.020053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.020256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.020321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.020564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.020607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.020796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.020831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.021069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.021134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.021398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.021485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.021681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.021716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.021927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.021991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.022251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.022316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.022568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.022604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.022799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.022863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.023131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.023195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.023474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.023510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.023754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.023818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.024078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.024142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.024379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.024413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.024605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.024670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.024935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.024998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.025241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.025276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.025519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.025586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.025854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.025917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.026160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.026195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.026364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.026444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.026734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.026799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.027063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.027098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.027318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.027381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.027696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.027758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.028036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.028071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.028310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.028374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.028682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.028748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.481 [2024-07-24 19:21:52.029017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.481 [2024-07-24 19:21:52.029052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.481 qpair failed and we were unable to recover it.
00:29:46.482 [2024-07-24 19:21:52.029259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.482 [2024-07-24 19:21:52.029324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.482 qpair failed and we were unable to recover it.
00:29:46.482 [2024-07-24 19:21:52.029608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.482 [2024-07-24 19:21:52.029644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.482 qpair failed and we were unable to recover it.
00:29:46.482 [2024-07-24 19:21:52.029820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.482 [2024-07-24 19:21:52.029855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.482 qpair failed and we were unable to recover it.
00:29:46.482 [2024-07-24 19:21:52.030066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.482 [2024-07-24 19:21:52.030130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.482 qpair failed and we were unable to recover it.
00:29:46.482 [2024-07-24 19:21:52.030369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.482 [2024-07-24 19:21:52.030453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.482 qpair failed and we were unable to recover it.
00:29:46.482 [2024-07-24 19:21:52.030698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.482 [2024-07-24 19:21:52.030733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.482 qpair failed and we were unable to recover it.
00:29:46.482 [2024-07-24 19:21:52.030971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.482 [2024-07-24 19:21:52.031034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.482 qpair failed and we were unable to recover it.
00:29:46.482 [2024-07-24 19:21:52.031309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.482 [2024-07-24 19:21:52.031373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.482 qpair failed and we were unable to recover it.
00:29:46.482 [2024-07-24 19:21:52.031658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.482 [2024-07-24 19:21:52.031694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.482 qpair failed and we were unable to recover it.
00:29:46.482 [2024-07-24 19:21:52.031920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.482 [2024-07-24 19:21:52.031984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.482 qpair failed and we were unable to recover it.
00:29:46.482 [2024-07-24 19:21:52.032247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.032311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.032594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.032636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.032852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.032915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.033177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.033241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.033510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.033546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.033791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.033855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.034115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.034180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.034459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.034495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.034681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.034746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.034988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.035052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 
00:29:46.482 [2024-07-24 19:21:52.035292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.035327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.035498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.035564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.035833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.035898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.036162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.036197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.036414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.036521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.036807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.036872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.037144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.037179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.037384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.037473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.037745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.037809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.038037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.038071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 
00:29:46.482 [2024-07-24 19:21:52.038258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.038323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.038583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.038619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.038796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.038831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.039061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.039124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.039359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.482 [2024-07-24 19:21:52.039423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.482 qpair failed and we were unable to recover it. 00:29:46.482 [2024-07-24 19:21:52.039628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.039664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.039856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.039920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.040200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.040264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.040549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.040586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.040826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.040890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 
00:29:46.483 [2024-07-24 19:21:52.041161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.041224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.041466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.041503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.041688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.041754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.042027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.042091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.042355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.042390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.042610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.042677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.042941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.043005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.043235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.043270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.043484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.043551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.043826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.043890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 
00:29:46.483 [2024-07-24 19:21:52.044175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.044210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.044373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.044466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.044745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.044810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.045078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.045113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.045343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.045406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.045666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.045731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.045994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.046031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.046228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.046291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.046561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.046627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.046839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.046874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 
00:29:46.483 [2024-07-24 19:21:52.047091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.047155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.047389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.047479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.047686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.047721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.047921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.047986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.048229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.048293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.048580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.048616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.048804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.048868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.049098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.049163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.049357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.483 [2024-07-24 19:21:52.049392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.483 qpair failed and we were unable to recover it. 00:29:46.483 [2024-07-24 19:21:52.049597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.049663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 
00:29:46.484 [2024-07-24 19:21:52.049899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.049964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.050227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.050261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.050483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.050550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.050813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.050878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.051148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.051183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.051386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.051465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.051744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.051807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.052047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.052082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.052282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.052346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.052623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.052658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 
00:29:46.484 [2024-07-24 19:21:52.052863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.052899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.053101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.053166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.053454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.053518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.053802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.053837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.054076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.054141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.054404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.054486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.054762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.054798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.055006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.055071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.055315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.055379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.055650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.055686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 
00:29:46.484 [2024-07-24 19:21:52.055923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.055988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.056220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.056293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.056569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.056606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.056826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.056890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.057133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.057197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.057476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.057513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.057722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.057788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.058053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.058116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.058382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.058417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.058622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.058686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 
00:29:46.484 [2024-07-24 19:21:52.058934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.059000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.059229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.059265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.059460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.059525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.059760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.059825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.060064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.060099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.060323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.060387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.060666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.484 [2024-07-24 19:21:52.060738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.484 qpair failed and we were unable to recover it. 00:29:46.484 [2024-07-24 19:21:52.061011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.061046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.061251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.061315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.061561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.061596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 
00:29:46.485 [2024-07-24 19:21:52.061743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.061785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.061986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.062051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.062289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.062354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.062636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.062671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.062879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.062944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.063200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.063264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.063518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.063555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.063767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.063832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.064085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.064149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.064434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.064470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 
00:29:46.485 [2024-07-24 19:21:52.064637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.064703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.064942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.065005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.065240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.065274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.065462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.065527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.065789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.065852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.066117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.066152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.066372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.066450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.066694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.066759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.067024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.067060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.067222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.067286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 
00:29:46.485 [2024-07-24 19:21:52.067546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.067613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.067859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.067900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.068115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.068179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.068415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.068495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.068749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.068784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.068988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.069052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.069292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.069357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.069632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.069669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.069854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.069918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.070139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.070203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 
00:29:46.485 [2024-07-24 19:21:52.070459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.070511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.070712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.070777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.071027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.071092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.071376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.071478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.071662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.485 [2024-07-24 19:21:52.071728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.485 qpair failed and we were unable to recover it. 00:29:46.485 [2024-07-24 19:21:52.072010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.486 [2024-07-24 19:21:52.072075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.486 qpair failed and we were unable to recover it. 00:29:46.486 [2024-07-24 19:21:52.072341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.486 [2024-07-24 19:21:52.072375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.486 qpair failed and we were unable to recover it. 00:29:46.486 [2024-07-24 19:21:52.072546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.486 [2024-07-24 19:21:52.072612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.486 qpair failed and we were unable to recover it. 00:29:46.486 [2024-07-24 19:21:52.072882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.486 [2024-07-24 19:21:52.072946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.486 qpair failed and we were unable to recover it. 00:29:46.486 [2024-07-24 19:21:52.073186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.486 [2024-07-24 19:21:52.073222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.486 qpair failed and we were unable to recover it. 
00:29:46.486 [2024-07-24 19:21:52.073399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.486 [2024-07-24 19:21:52.073491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.486 qpair failed and we were unable to recover it. 00:29:46.486 [2024-07-24 19:21:52.073733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.486 [2024-07-24 19:21:52.073798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.486 qpair failed and we were unable to recover it. 00:29:46.486 [2024-07-24 19:21:52.074062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.486 [2024-07-24 19:21:52.074098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.486 qpair failed and we were unable to recover it. 00:29:46.486 [2024-07-24 19:21:52.074319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.486 [2024-07-24 19:21:52.074384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.486 qpair failed and we were unable to recover it. 00:29:46.486 [2024-07-24 19:21:52.074633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.486 [2024-07-24 19:21:52.074698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.486 qpair failed and we were unable to recover it. 00:29:46.486 [2024-07-24 19:21:52.074984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.486 [2024-07-24 19:21:52.075020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.486 qpair failed and we were unable to recover it. 00:29:46.486 [2024-07-24 19:21:52.075213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.486 [2024-07-24 19:21:52.075277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.486 qpair failed and we were unable to recover it. 00:29:46.486 [2024-07-24 19:21:52.075504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.486 [2024-07-24 19:21:52.075540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.486 qpair failed and we were unable to recover it. 00:29:46.486 [2024-07-24 19:21:52.075736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.486 [2024-07-24 19:21:52.075771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.486 qpair failed and we were unable to recover it. 00:29:46.486 [2024-07-24 19:21:52.075982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.486 [2024-07-24 19:21:52.076046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.486 qpair failed and we were unable to recover it. 
00:29:46.486 [2024-07-24 19:21:52.076308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.486 [2024-07-24 19:21:52.076373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.486 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously, with only the timestamps advancing, from 2024-07-24 19:21:52.076 through 19:21:52.138; every retry targets the same tqpair=0x7f5e08000b90, addr=10.0.0.2, port=4420 ...]
00:29:46.492 [2024-07-24 19:21:52.137961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.492 [2024-07-24 19:21:52.138025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:46.492 qpair failed and we were unable to recover it.
00:29:46.492 [2024-07-24 19:21:52.138214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.138278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.138514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.138555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.138747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.138811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.139055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.139119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.139416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.139502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.139722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.139786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.140044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.140108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.140383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.140461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.140676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.140731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.140901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.140934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 
00:29:46.492 [2024-07-24 19:21:52.141139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.141174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.141392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.141491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.141735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.141798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.142078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.142112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.142309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.142372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.142690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.142763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.143032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.143066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.143221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.143254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.143474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.143510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.143671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.143706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 
00:29:46.492 [2024-07-24 19:21:52.143932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.143988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.144185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.144251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.144510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.144547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.144732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.144796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.492 qpair failed and we were unable to recover it. 00:29:46.492 [2024-07-24 19:21:52.145043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.492 [2024-07-24 19:21:52.145076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.493 qpair failed and we were unable to recover it. 00:29:46.493 [2024-07-24 19:21:52.145302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.493 [2024-07-24 19:21:52.145366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.493 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.145628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.145665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.145900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.145965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.146179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.146215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.146445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.146510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 
00:29:46.768 [2024-07-24 19:21:52.146740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.146805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.147070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.147119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.147271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.147304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.147504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.147539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.147719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.147752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.147948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.148019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.148276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.148311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.148540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.148575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.148763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.148828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.149052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.149086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 
00:29:46.768 [2024-07-24 19:21:52.149219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.149253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.149459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.149494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.149698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.149731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.149934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.149984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.150120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.150153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.150401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.150494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.150700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.150733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.150945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.151010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.151270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.151334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.151584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.151620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 
00:29:46.768 [2024-07-24 19:21:52.151837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.151901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.152168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.152232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.152447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.152484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.152703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.152769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.153029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.153093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.153345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.153380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.153582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.153648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.153891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.153956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.768 [2024-07-24 19:21:52.154207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.768 [2024-07-24 19:21:52.154243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.768 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.154462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.154528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 
00:29:46.769 [2024-07-24 19:21:52.154783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.154848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.155108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.155144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.155310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.155375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.155621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.155657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.155850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.155886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.156108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.156172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.156446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.156513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.156689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.156725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.156922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.156995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.157224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.157289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 
00:29:46.769 [2024-07-24 19:21:52.157523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.157559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.157709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.157773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.157975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.158040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.158238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.158272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.158484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.158550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.158754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.158818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.159045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.159080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.159255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.159320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.159584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.159620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.159760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.159795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 
00:29:46.769 [2024-07-24 19:21:52.159941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.160005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.160254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.160318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.160598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.160633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.160817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.160882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.161134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.161198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.161452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.161489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.161653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.161727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.161951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.162016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.162238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.162273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.162412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.162489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 
00:29:46.769 [2024-07-24 19:21:52.162716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.162781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.163005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.163040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.163219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.163282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.163534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.163600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.163854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.163889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.769 [2024-07-24 19:21:52.164098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.769 [2024-07-24 19:21:52.164162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.769 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.164385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.164465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.164697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.164732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.164948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.165012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.165233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.165297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 
00:29:46.770 [2024-07-24 19:21:52.165540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.165575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.165780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.165845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.166119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.166183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.166414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.166459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.166646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.166711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.166981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.167045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.167307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.167342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.167570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.167635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.167901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.167976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.168213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.168249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 
00:29:46.770 [2024-07-24 19:21:52.168470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.168535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.168809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.168874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.169152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.169187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.169444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.169509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.169780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.169845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.170114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.170148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.170335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.170399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.170651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.170686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.170889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.170924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.171149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.171214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 
00:29:46.770 [2024-07-24 19:21:52.171486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.171552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.171816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.171852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.172016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.172081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.172344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.172408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.172672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.172707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.172936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.173000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.173237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.173301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.173525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.173560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.173755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.173819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.174057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.174121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 
00:29:46.770 [2024-07-24 19:21:52.174392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.174471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.770 [2024-07-24 19:21:52.174726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.770 [2024-07-24 19:21:52.174791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.770 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.175028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.175092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.175326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.175361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.175553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.175618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.175824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.175889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.176129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.176164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.176349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.176413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.176648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.176684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.176939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.176974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 
00:29:46.771 [2024-07-24 19:21:52.177171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.177235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.177493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.177530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.177734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.177769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.178015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.178079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.178289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.178354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.178627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.178662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.178859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.178923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.179195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.179259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.179506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.179547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.179746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.179811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 
00:29:46.771 [2024-07-24 19:21:52.180079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.180144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.180359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.180394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.180631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.180697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.180961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.181025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.181257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.181292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.181519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.181585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.181813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.181878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.182156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.182191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.182363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.182427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.182714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.182778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 
00:29:46.771 [2024-07-24 19:21:52.183011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.183046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.183201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.183266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.183542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.183607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.183878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.183914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.184097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.184160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.184389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.184466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.184713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.184749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.771 [2024-07-24 19:21:52.184970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.771 [2024-07-24 19:21:52.185034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.771 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.185272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.185336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.185623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.185659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 
00:29:46.772 [2024-07-24 19:21:52.185902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.185966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.186238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.186303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.186564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.186600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.186760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.186824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.187059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.187123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.187387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.187478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.187711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.187775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.188049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.188114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.188402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.188486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.188714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.188779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 
00:29:46.772 [2024-07-24 19:21:52.189049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.189114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.189351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.189385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.189612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.189678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.189916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.189979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.190216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.190249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.190422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.190516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.190754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.190835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.191039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.191072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.191242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.191331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.191610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.191645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 
00:29:46.772 [2024-07-24 19:21:52.191820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.191855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.192044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.192077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.192236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.192294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.192555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.192591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.192766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.192829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.193090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.193154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.193438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.772 [2024-07-24 19:21:52.193474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.772 qpair failed and we were unable to recover it. 00:29:46.772 [2024-07-24 19:21:52.193697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.193761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.194023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.194087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.194358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.194393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 
00:29:46.773 [2024-07-24 19:21:52.194593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.194658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.194919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.194983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.195231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.195266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.195501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.195566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.195801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.195865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.196126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.196160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.196366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.196448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.196703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.196769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.197038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.197073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.197291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.197355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 
00:29:46.773 [2024-07-24 19:21:52.197635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.197671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.197903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.197938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.198086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.198149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.198410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.198493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.198771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.198805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.199028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.199093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.199360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.199424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.199690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.199725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.199945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.200009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.200253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.200317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 
00:29:46.773 [2024-07-24 19:21:52.200591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.200626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.200837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.200901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.201145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.201210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.201461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.201497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.201662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.201727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.201997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.202060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.202302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.202336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.202563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.202628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.202870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.202943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.203208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.203243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 
00:29:46.773 [2024-07-24 19:21:52.203475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.203541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.203812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.203875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.204117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.204152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.204391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.773 [2024-07-24 19:21:52.204470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.773 qpair failed and we were unable to recover it. 00:29:46.773 [2024-07-24 19:21:52.204718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.204782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.205014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.205048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.205232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.205296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.205541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.205577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.205776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.205811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.205975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.206039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 
00:29:46.774 [2024-07-24 19:21:52.206290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.206354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.206631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.206668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.206850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.206914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.207172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.207235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.207475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.207510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.207707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.207772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.208038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.208102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.208324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.208359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.208584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.208649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.208889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.208952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 
00:29:46.774 [2024-07-24 19:21:52.209217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.209252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.209490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.209555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.209812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.209881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.210159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.210194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.210447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.210516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.210806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.210872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.211153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.211191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.211390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.211477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.211758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.211825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.212089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.212124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 
00:29:46.774 [2024-07-24 19:21:52.212282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.212319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.212544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.212611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.212852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.212888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.213099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.213163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.213419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.213512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.213698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.213733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.213916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.213977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.214219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.214289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.214563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.214606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 00:29:46.774 [2024-07-24 19:21:52.214813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.214850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.774 qpair failed and we were unable to recover it. 
00:29:46.774 [2024-07-24 19:21:52.215082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.774 [2024-07-24 19:21:52.215148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.215390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.215426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.215597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.215633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.215798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.215835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.216017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.216053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.216276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.216353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.216640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.216676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.216907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.216943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.217201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.217281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.217553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.217590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 
00:29:46.775 [2024-07-24 19:21:52.217807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.217843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.218089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.218153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.218449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.218505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.218691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.218727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.218901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.218937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.219222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.219286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.219523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.219561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.219765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.219800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.219986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.220022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.220197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.220237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 
00:29:46.775 [2024-07-24 19:21:52.220453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.220491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.220664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.220700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.220841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.220881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.221099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.221165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.221511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.221579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.221838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.221874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.222201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.222265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.222542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.222608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.222872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.222908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.223134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.223201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 
00:29:46.775 [2024-07-24 19:21:52.223465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.223547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.223874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.223953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.224226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.224291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.224577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.224643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.224950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.224987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.225178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.225243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.225522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.225591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.775 qpair failed and we were unable to recover it. 00:29:46.775 [2024-07-24 19:21:52.225907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.775 [2024-07-24 19:21:52.225948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.776 qpair failed and we were unable to recover it. 00:29:46.776 [2024-07-24 19:21:52.226179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.776 [2024-07-24 19:21:52.226221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.776 qpair failed and we were unable to recover it. 00:29:46.776 [2024-07-24 19:21:52.226483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.776 [2024-07-24 19:21:52.226520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.776 qpair failed and we were unable to recover it. 
00:29:46.781 [2024-07-24 19:21:52.285590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.285627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.285795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.285862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.286128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.286165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.286342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.286378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.286605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.286645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.286870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.286906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.287114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.287150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.287373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.287473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.287722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.287765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.287983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.288021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 
00:29:46.781 [2024-07-24 19:21:52.288254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.288318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.288650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.288688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.288893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.288937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.289148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.289212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.289494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.289539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.289706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.289743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.289951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.289987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.290208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.290243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.290523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.290560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.290782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.290847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 
00:29:46.781 [2024-07-24 19:21:52.291111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.291177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.291494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.781 [2024-07-24 19:21:52.291530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.781 qpair failed and we were unable to recover it. 00:29:46.781 [2024-07-24 19:21:52.291741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.291778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.291982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.292028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.292240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.292275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.292477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.292519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.292733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.292768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.293069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.293135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.293455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.293521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.293819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.293855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 
00:29:46.782 [2024-07-24 19:21:52.294105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.294186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.294514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.294581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.294873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.294910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.295128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.295164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.295382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.295418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.295690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.295745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.295942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.295998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.296206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.296264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.296462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.296523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.296726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.296763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 
00:29:46.782 [2024-07-24 19:21:52.296986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.297044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.297213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.297271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.297486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.297543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.297766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.297823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.298064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.298130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.298340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.298376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.298564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.298600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.298832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.298889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.299072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.299128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.299343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.299379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 
00:29:46.782 [2024-07-24 19:21:52.299617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.299674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.299848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.299907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.300144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.300202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.782 [2024-07-24 19:21:52.300436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.782 [2024-07-24 19:21:52.300473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.782 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.300638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.300673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.300903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.300963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.301188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.301245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.301422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.301467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.301648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.301683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.301910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.301967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 
00:29:46.783 [2024-07-24 19:21:52.302207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.302262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.302488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.302524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.302723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.302778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.302980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.303038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.303249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.303286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.303484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.303553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.303786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.303840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.304074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.304131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.304350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.304385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.304580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.304639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 
00:29:46.783 [2024-07-24 19:21:52.304827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.304891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.305078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.305138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.305327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.305364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.305560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.305619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.305836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.305893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.306124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.306186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.306380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.306415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.306663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.306723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.306954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.307011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.307228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.307283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 
00:29:46.783 [2024-07-24 19:21:52.307484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.307556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.307785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.307852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.308082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.308139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.308342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.308381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.308590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.308648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.308876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.308937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.309175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.309247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.309438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.309477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.309665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.309720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.783 [2024-07-24 19:21:52.309932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.309989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 
00:29:46.783 [2024-07-24 19:21:52.310192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.783 [2024-07-24 19:21:52.310248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.783 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.310422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.310474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.310711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.310764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.310960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.311016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.311231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.311288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.311509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.311572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.311776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.311838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.312032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.312087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.312264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.312302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.312495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.312558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 
00:29:46.784 [2024-07-24 19:21:52.312738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.312794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.313017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.313081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.313291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.313330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.313537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.313595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.313822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.313879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.314074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.314129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.314334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.314373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.314603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.314663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.314898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.314954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.315169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.315225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 
00:29:46.784 [2024-07-24 19:21:52.315425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.315473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.315655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.315710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.315904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.315960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.316184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.316243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.316440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.316477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.316688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.316752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.316982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.317046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.317227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.317283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.317485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.317552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.317748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.317804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 
00:29:46.784 [2024-07-24 19:21:52.317995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.318051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.318222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.318258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.318459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.318515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.318757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.318822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.319007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.319061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.319236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.319271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.319482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.319541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.784 [2024-07-24 19:21:52.319783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.784 [2024-07-24 19:21:52.319848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.784 qpair failed and we were unable to recover it. 00:29:46.785 [2024-07-24 19:21:52.320075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.785 [2024-07-24 19:21:52.320131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.785 qpair failed and we were unable to recover it. 00:29:46.785 [2024-07-24 19:21:52.320363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.785 [2024-07-24 19:21:52.320399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.785 qpair failed and we were unable to recover it. 
00:29:46.785 [2024-07-24 19:21:52.320627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.785 [2024-07-24 19:21:52.320686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.785 qpair failed and we were unable to recover it. 00:29:46.785 [2024-07-24 19:21:52.320923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.785 [2024-07-24 19:21:52.320977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.785 qpair failed and we were unable to recover it. 00:29:46.785 [2024-07-24 19:21:52.321197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.785 [2024-07-24 19:21:52.321252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.785 qpair failed and we were unable to recover it. 00:29:46.785 [2024-07-24 19:21:52.321439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.785 [2024-07-24 19:21:52.321475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.785 qpair failed and we were unable to recover it. 00:29:46.785 [2024-07-24 19:21:52.321702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.785 [2024-07-24 19:21:52.321758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.785 qpair failed and we were unable to recover it. 00:29:46.785 [2024-07-24 19:21:52.321955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.785 [2024-07-24 19:21:52.322010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.785 qpair failed and we were unable to recover it. 00:29:46.785 [2024-07-24 19:21:52.322226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.785 [2024-07-24 19:21:52.322281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.785 qpair failed and we were unable to recover it. 00:29:46.785 [2024-07-24 19:21:52.322507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.785 [2024-07-24 19:21:52.322569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.785 qpair failed and we were unable to recover it. 00:29:46.785 [2024-07-24 19:21:52.322789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.785 [2024-07-24 19:21:52.322846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.785 qpair failed and we were unable to recover it. 00:29:46.785 [2024-07-24 19:21:52.323077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.785 [2024-07-24 19:21:52.323134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.785 qpair failed and we were unable to recover it. 
00:29:46.785 [2024-07-24 19:21:52.323316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:46.785 [2024-07-24 19:21:52.323352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:46.785 qpair failed and we were unable to recover it.
00:29:46.791 [... the same three-message triplet (posix.c:1023 "connect() failed, errno = 111" / nvme_tcp.c:2383 "sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it.") repeats verbatim for every reconnect attempt from 19:21:52.323316 through 19:21:52.379996; only the timestamps differ between repetitions ...]
00:29:46.791 [2024-07-24 19:21:52.380179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.380214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.380387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.380422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.380658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.380713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.380946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.381003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.381223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.381277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.381500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.381568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.381791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.381846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.382067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.382122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.382321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.382356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.382499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.382562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 
00:29:46.791 [2024-07-24 19:21:52.382748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.382804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.383027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.383081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.383303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.383338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.383562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.383618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.383806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.383860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.384077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.384132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.384275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.384311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.384506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.384562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.384759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.384795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.385001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.385056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 
00:29:46.791 [2024-07-24 19:21:52.385266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.385301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.385523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.385578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.385779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.385814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.386044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.791 [2024-07-24 19:21:52.386099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.791 qpair failed and we were unable to recover it. 00:29:46.791 [2024-07-24 19:21:52.386305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.386341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.386540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.386596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.386788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.386845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.387055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.387109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.387379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.387415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.387695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.387758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 
00:29:46.792 [2024-07-24 19:21:52.387988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.388043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.388257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.388314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.388509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.388565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.388753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.388807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.388996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.389052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.389233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.389269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.389474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.389510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.389698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.389755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.389986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.390040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.390257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.390293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 
00:29:46.792 [2024-07-24 19:21:52.390506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.390566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.390789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.390842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.391035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.391090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.391307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.391342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.391508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.391564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.391797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.391861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.392101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.392156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.392359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.392394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.392626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.392682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.392900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.392959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 
00:29:46.792 [2024-07-24 19:21:52.393175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.393231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.393404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.393448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.393674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.393733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.393928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.393984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.394171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.394227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.394399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.394443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.394641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.394697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.394886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.394941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.395169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.395225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.395461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.395498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 
00:29:46.792 [2024-07-24 19:21:52.395687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.792 [2024-07-24 19:21:52.395742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.792 qpair failed and we were unable to recover it. 00:29:46.792 [2024-07-24 19:21:52.395964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.396020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.396199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.396253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.396439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.396475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.396707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.396761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.396953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.397010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.397175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.397232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.397441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.397477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.397700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.397757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.397973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.398031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 
00:29:46.793 [2024-07-24 19:21:52.398261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.398317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.398492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.398528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.398727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.398784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.399006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.399063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.399245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.399301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.399511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.399568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.399748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.399803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.400039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.400094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.400399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.400442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.400721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.400778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 
00:29:46.793 [2024-07-24 19:21:52.400963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.401019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.401237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.401290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.401496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.401558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.401751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.401806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.402023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.402078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.402316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.402357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.402589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.402645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.402844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.402900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.403097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.403151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.403355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.403391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 
00:29:46.793 [2024-07-24 19:21:52.403635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.403697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.403920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.403975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.404203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.404259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.404439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.404475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.404700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.404759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.793 [2024-07-24 19:21:52.404974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.793 [2024-07-24 19:21:52.405034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.793 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.405227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.405283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.405511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.405568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.405777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.405833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.406025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.406081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 
00:29:46.794 [2024-07-24 19:21:52.406255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.406290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.406500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.406537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.406718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.406754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.406932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.406966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.407188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.407245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.407499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.407567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.407790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.407845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.408038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.408096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.408296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.408331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.408545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.408602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 
00:29:46.794 [2024-07-24 19:21:52.408819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.408876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.409032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.409085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.409292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.409328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.409542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.409597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.409818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.409873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.410092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.410147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.410348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.410383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.410647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.410703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.410919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.410974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.411197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.411253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 
00:29:46.794 [2024-07-24 19:21:52.411458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.411494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.411708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.411764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.411942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.411997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.412222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.412278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.412493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.412555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.412752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.412812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.413029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.413086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.413256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.413292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.413461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.413497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.413696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.413750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 
00:29:46.794 [2024-07-24 19:21:52.413969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.414025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.414200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.414235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.794 [2024-07-24 19:21:52.414412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.794 [2024-07-24 19:21:52.414456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.794 qpair failed and we were unable to recover it. 00:29:46.795 [2024-07-24 19:21:52.414655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.795 [2024-07-24 19:21:52.414721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.795 qpair failed and we were unable to recover it. 00:29:46.795 [2024-07-24 19:21:52.414944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.795 [2024-07-24 19:21:52.414998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.795 qpair failed and we were unable to recover it. 00:29:46.795 [2024-07-24 19:21:52.415183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.795 [2024-07-24 19:21:52.415239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.795 qpair failed and we were unable to recover it. 00:29:46.795 [2024-07-24 19:21:52.415418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.795 [2024-07-24 19:21:52.415473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.795 qpair failed and we were unable to recover it. 00:29:46.795 [2024-07-24 19:21:52.415706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.795 [2024-07-24 19:21:52.415763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.795 qpair failed and we were unable to recover it. 00:29:46.795 [2024-07-24 19:21:52.415930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.795 [2024-07-24 19:21:52.415985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.795 qpair failed and we were unable to recover it. 00:29:46.795 [2024-07-24 19:21:52.416223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:46.795 [2024-07-24 19:21:52.416277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:46.795 qpair failed and we were unable to recover it. 
00:29:47.063 [2024-07-24 19:21:52.466982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.063 [2024-07-24 19:21:52.467048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.063 qpair failed and we were unable to recover it. 00:29:47.063 [2024-07-24 19:21:52.467234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.063 [2024-07-24 19:21:52.467291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.063 qpair failed and we were unable to recover it. 00:29:47.063 [2024-07-24 19:21:52.467535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.063 [2024-07-24 19:21:52.467592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.063 qpair failed and we were unable to recover it. 00:29:47.063 [2024-07-24 19:21:52.467825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.063 [2024-07-24 19:21:52.467877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.063 qpair failed and we were unable to recover it. 00:29:47.063 [2024-07-24 19:21:52.468040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.063 [2024-07-24 19:21:52.468096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.063 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.468269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.468308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.468489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.468526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.468733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.468769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.468944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.468980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.469198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.469234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 
00:29:47.064 [2024-07-24 19:21:52.469452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.469488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.469668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.469727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.469970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.470034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.470192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.470254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.470480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.470514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.470742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.470796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.471018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.471069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.471242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.471276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.471504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.471561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.471808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.471861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 
00:29:47.064 [2024-07-24 19:21:52.472085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.472140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.472341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.472375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.472576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.472637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.472855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.472911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.473136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.473189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.473376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.473412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.473649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.473719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.473938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.474003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.474234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.474290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.474523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.474577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 
00:29:47.064 [2024-07-24 19:21:52.474803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.474869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.475080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.475140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.475351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.475384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.475573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.475629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.475860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.475916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.476135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.476186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.476395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.476443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.476620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.476672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.476848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.476906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.477146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.477198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 
00:29:47.064 [2024-07-24 19:21:52.477407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.477450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.477658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.064 [2024-07-24 19:21:52.477695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.064 qpair failed and we were unable to recover it. 00:29:47.064 [2024-07-24 19:21:52.477889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.477951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.493721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.493774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.493995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.494032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.494251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.494304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.494525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.494561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.494740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.494775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.494979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.495033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.495226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.495282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 
00:29:47.065 [2024-07-24 19:21:52.495500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.495540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.495742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.495775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.495998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.496049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.496355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.496414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.496691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.496726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.496950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.496985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.497193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.497226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.497457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.497491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.497691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.497730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.497950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.498004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 
00:29:47.065 [2024-07-24 19:21:52.498189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.498241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.498451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.498485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.498688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.498728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.498965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.499023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.499238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.499289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.499518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.499554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.499807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.499866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.500083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.500134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.500311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.500344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.500556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.500617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 
00:29:47.065 [2024-07-24 19:21:52.500863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.500927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.501163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.501216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.501393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.501425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.501664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.501728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.501936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.501995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.502224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.502278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.502500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.502559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.502771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.502832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.503064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.503121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 00:29:47.065 [2024-07-24 19:21:52.503330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.065 [2024-07-24 19:21:52.503363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.065 qpair failed and we were unable to recover it. 
00:29:47.065 [2024-07-24 19:21:52.503560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.503622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.503847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.503905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.504068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.504131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.504351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.504384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.504590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.504644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.504862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.504929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.513505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.513574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.513827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.513888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.514089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.514127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.514337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.514373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 
00:29:47.066 [2024-07-24 19:21:52.514594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.514640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.514830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.514867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.515642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.515686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.515890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.515926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.516148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.516184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.516383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.516419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.516659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.516694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.516912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.516957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.517193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.517271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.517555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.517598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 
00:29:47.066 [2024-07-24 19:21:52.517867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.517942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.518250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.518325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.518607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.518656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.518898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.518963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.519287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.519364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.519637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.519681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.519989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.520070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.520382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.520482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.520716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.520758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.066 [2024-07-24 19:21:52.521069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.521146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 
00:29:47.066 [2024-07-24 19:21:52.521457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.066 [2024-07-24 19:21:52.521519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.066 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.521762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.521822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.522148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.522228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.522499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.522536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.522739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.522772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.522992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.523054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.523328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.523390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.523624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.523658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.523871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.523933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.524196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.524257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 
00:29:47.067 [2024-07-24 19:21:52.524522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.524557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.524767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.524828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.525089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.525151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.525438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.525500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.525747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.525810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.526099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.526160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.526465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.526525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.526729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.526791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.527098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.527160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.527487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.527522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 
00:29:47.067 [2024-07-24 19:21:52.527733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.527794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.528030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.528092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.528359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.528421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.528674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.528765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.529062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.529137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.529416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.529507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.529756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.529817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.530085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.530147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.530394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.530473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 00:29:47.067 [2024-07-24 19:21:52.530715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.067 [2024-07-24 19:21:52.530778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.067 qpair failed and we were unable to recover it. 
00:29:47.067 [2024-07-24 19:21:52.531054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.067 [2024-07-24 19:21:52.531116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:47.067 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1023:posix_sock_create connect() failed, errno = 111 -> nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats back-to-back throughout this span, roughly every few hundred microseconds; the only break in the cadence is a ~600 ms pause between 19:21:52.534 and 19:21:53.132, where the elapsed counter jumps from 00:29:47.068 to 00:29:47.650 ...]
00:29:47.656 [2024-07-24 19:21:53.192768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.656 [2024-07-24 19:21:53.192805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:47.656 qpair failed and we were unable to recover it.
00:29:47.656 [2024-07-24 19:21:53.193037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.193102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.193348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.193413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.193696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.193761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.194004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.194041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.194228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.194294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.194561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.194626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.194909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.194975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.195247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.195283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.195516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.195582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.195846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.195911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 
00:29:47.656 [2024-07-24 19:21:53.196174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.196239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.196480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.196517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.196674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.196739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.196999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.197062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.197323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.197387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.197673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.197709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.197937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.198003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.198283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.198348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.198610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.198646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.198832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.198868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 
00:29:47.656 [2024-07-24 19:21:53.199116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.199182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.199445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.199511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.199782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.199848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.200091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.200127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.200321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.200385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.200638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.200704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.200955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.201020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.201285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.201350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.201617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.201653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.201862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.201926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 
00:29:47.656 [2024-07-24 19:21:53.202155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.202219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.202446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.202483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.202641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.202716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.202941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.203005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.203239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.656 [2024-07-24 19:21:53.203305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.656 qpair failed and we were unable to recover it. 00:29:47.656 [2024-07-24 19:21:53.203543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.203580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.203756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.203821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.204069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.204135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.204391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.204468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.204726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.204762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 
00:29:47.657 [2024-07-24 19:21:53.204934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.204999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.205234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.205299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.205529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.205595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.205857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.205893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.206092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.206157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.206416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.206498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.206768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.206834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.207060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.207095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.207251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.207316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.207561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.207597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 
00:29:47.657 [2024-07-24 19:21:53.207759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.207824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.208094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.208130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.208333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.208397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.208659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.208725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.208963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.209027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.209246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.209281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.209465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.209532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.209758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.209822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.210081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.210145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.210358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.210407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 
00:29:47.657 [2024-07-24 19:21:53.210613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.210679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.210954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.211018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.211262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.211327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.211585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.211622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.211786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.211852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.212113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.212179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.212422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.212502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.212779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.212816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.213028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.213092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.657 [2024-07-24 19:21:53.213374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.213456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 
00:29:47.657 [2024-07-24 19:21:53.213730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.657 [2024-07-24 19:21:53.213796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.657 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.214046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.214082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.214293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.214358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.214663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.214729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.214929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.214995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.215267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.215303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.215523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.215589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.215853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.215918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.216182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.216248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.216550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.216587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 
00:29:47.658 [2024-07-24 19:21:53.216839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.216903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.217159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.217224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.217467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.217533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.217797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.217832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.218076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.218142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.218387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.218465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.218764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.218830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.219104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.219141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.219367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.219449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.219690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.219727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 
00:29:47.658 [2024-07-24 19:21:53.219995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.220060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.220343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.220379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.220633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.220670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.220910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.220975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.221216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.221281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.221543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.221580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.221796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.221861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.222122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.222187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.222464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.222530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.222799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.222839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 
00:29:47.658 [2024-07-24 19:21:53.223021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.223089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.223354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.223420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.223734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.223800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.224073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.224109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.658 [2024-07-24 19:21:53.224325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.658 [2024-07-24 19:21:53.224392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.658 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.224691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.224756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.225033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.225098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.225332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.225368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.225572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.225608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.225823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.225888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 
00:29:47.659 [2024-07-24 19:21:53.226124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.226190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.226461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.226497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.226710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.226775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.227065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.227130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.227371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.227450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.227703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.227739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.227930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.227995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.228255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.228319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.228597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.228663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.228931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.228967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 
00:29:47.659 [2024-07-24 19:21:53.229190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.229255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.229548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.229614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.229857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.229921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.230148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.230183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.230382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.230464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.230702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.230767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.231022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.231087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.231320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.231356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.231517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.231584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 00:29:47.659 [2024-07-24 19:21:53.231845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.659 [2024-07-24 19:21:53.231910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.659 qpair failed and we were unable to recover it. 
00:29:47.659 [2024-07-24 19:21:53.232146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.659 [2024-07-24 19:21:53.232210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:47.659 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every subsequent connection attempt from 2024-07-24 19:21:53.232474 through 2024-07-24 19:21:53.305128 ...]
00:29:47.666 [2024-07-24 19:21:53.305084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.666 [2024-07-24 19:21:53.305128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:47.666 qpair failed and we were unable to recover it.
00:29:47.666 [2024-07-24 19:21:53.305310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.666 [2024-07-24 19:21:53.305375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.305639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.305705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.305957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.306021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.306270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.306306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.306499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.306566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.306827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.306893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.307172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.307237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.307476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.307512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.307677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.307742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.307968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.308033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 
00:29:47.667 [2024-07-24 19:21:53.308358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.308422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.308712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.308748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.308952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.309018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.309304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.309371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.309658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.309695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.309938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.309974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.310169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.310235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.310496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.310564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.310826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.310891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.311138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.311174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 
00:29:47.667 [2024-07-24 19:21:53.311375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.311457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.311731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.311796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.312083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.312148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.312345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.312381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.312602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.312638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.312874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.312940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.313351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.313470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.313852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.313922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.314292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.314360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.314761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.314827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 
00:29:47.667 [2024-07-24 19:21:53.315112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.315176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.315443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.315479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.315656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.315731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.316070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.316134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.667 [2024-07-24 19:21:53.316516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.667 [2024-07-24 19:21:53.316553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.667 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.316873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.316939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.317237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.317300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.317609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.317645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.317849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.317912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.318290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.318354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 
00:29:47.668 [2024-07-24 19:21:53.318712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.318780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.319043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.319107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.319454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.319507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.319738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.319774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.320056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.320120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.320443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.320519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.320802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.320866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.321206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.321273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.321595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.321631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.321892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.321955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 
00:29:47.668 [2024-07-24 19:21:53.322281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.322343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.322665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.322701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.323026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.323089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.323482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.323541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.323843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.323907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.324236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.324305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.324572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.324607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.324813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.324876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.325194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.325258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.325572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.325608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 
00:29:47.668 [2024-07-24 19:21:53.325898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.325961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.326267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.326330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.326605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.326642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.326815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.326850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.327022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.327086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.327414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.327455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.327727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.327817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.328215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.328299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.328631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.328666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.328990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.329055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 
00:29:47.668 [2024-07-24 19:21:53.329341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.329375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.329559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.668 [2024-07-24 19:21:53.329594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.668 qpair failed and we were unable to recover it. 00:29:47.668 [2024-07-24 19:21:53.329761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.669 [2024-07-24 19:21:53.329848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.669 qpair failed and we were unable to recover it. 00:29:47.669 [2024-07-24 19:21:53.330196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.669 [2024-07-24 19:21:53.330267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.669 qpair failed and we were unable to recover it. 00:29:47.669 [2024-07-24 19:21:53.330518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.669 [2024-07-24 19:21:53.330553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.669 qpair failed and we were unable to recover it. 00:29:47.669 [2024-07-24 19:21:53.330751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.669 [2024-07-24 19:21:53.330802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.669 qpair failed and we were unable to recover it. 00:29:47.669 [2024-07-24 19:21:53.331066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.669 [2024-07-24 19:21:53.331130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.669 qpair failed and we were unable to recover it. 00:29:47.669 [2024-07-24 19:21:53.331434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.669 [2024-07-24 19:21:53.331468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.669 qpair failed and we were unable to recover it. 00:29:47.669 [2024-07-24 19:21:53.331630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.669 [2024-07-24 19:21:53.331665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.669 qpair failed and we were unable to recover it. 00:29:47.669 [2024-07-24 19:21:53.331856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.669 [2024-07-24 19:21:53.331918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.956 qpair failed and we were unable to recover it. 
00:29:47.956 [2024-07-24 19:21:53.332282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.956 [2024-07-24 19:21:53.332402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.956 qpair failed and we were unable to recover it. 00:29:47.956 [2024-07-24 19:21:53.332762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.956 [2024-07-24 19:21:53.332802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.956 qpair failed and we were unable to recover it. 00:29:47.956 [2024-07-24 19:21:53.333006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.956 [2024-07-24 19:21:53.333041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.956 qpair failed and we were unable to recover it. 00:29:47.956 [2024-07-24 19:21:53.333304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.956 [2024-07-24 19:21:53.333338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.956 qpair failed and we were unable to recover it. 00:29:47.956 [2024-07-24 19:21:53.333543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.956 [2024-07-24 19:21:53.333588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.956 qpair failed and we were unable to recover it. 00:29:47.956 [2024-07-24 19:21:53.333896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.956 [2024-07-24 19:21:53.334004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.956 qpair failed and we were unable to recover it. 00:29:47.956 [2024-07-24 19:21:53.334382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.956 [2024-07-24 19:21:53.334524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.956 qpair failed and we were unable to recover it. 00:29:47.956 [2024-07-24 19:21:53.334808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.956 [2024-07-24 19:21:53.334873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.956 qpair failed and we were unable to recover it. 00:29:47.956 [2024-07-24 19:21:53.335080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.956 [2024-07-24 19:21:53.335162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.956 qpair failed and we were unable to recover it. 00:29:47.956 [2024-07-24 19:21:53.335474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.956 [2024-07-24 19:21:53.335509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.956 qpair failed and we were unable to recover it. 
00:29:47.956 [2024-07-24 19:21:53.335776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.956 [2024-07-24 19:21:53.335824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.956 qpair failed and we were unable to recover it. 00:29:47.956 [2024-07-24 19:21:53.336082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.956 [2024-07-24 19:21:53.336119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.956 qpair failed and we were unable to recover it. 00:29:47.956 [2024-07-24 19:21:53.336352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.956 [2024-07-24 19:21:53.336386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.956 qpair failed and we were unable to recover it. 00:29:47.956 [2024-07-24 19:21:53.336755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.336790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.337120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.337185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.337475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.337509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.337735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.337768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.338006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.338073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.338351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.338414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.338653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.338703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 
00:29:47.957 [2024-07-24 19:21:53.339001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.339067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.339388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.339484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.339655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.339688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.339937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.339971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.340197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.340260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.340550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.340584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.340756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.340819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.341125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.341181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.341398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.341463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.341647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.341680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 
00:29:47.957 [2024-07-24 19:21:53.341916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.341980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.342365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.342458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.342760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.342822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.343130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.343193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.343532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.343567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.343750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.343783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.344007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.344070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.344346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.344379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.344635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.344669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.344845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.344880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 
00:29:47.957 [2024-07-24 19:21:53.345078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.345142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.345436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.345470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.345648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.345698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.346030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.346065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.346307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.346376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.346560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.346594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.346839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.346903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.347250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.347284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.347514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.347551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.347812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.347845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 
00:29:47.957 [2024-07-24 19:21:53.348035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.348098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.348362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.348412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.348634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.348688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.348978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.349060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.349340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.349404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.349730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.349764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.349964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.350027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.350306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.350339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.350533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.350569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.350738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.350773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 
00:29:47.957 [2024-07-24 19:21:53.351106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.351168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.351523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.351558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.351759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.351830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.352164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.352198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.352380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.352458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.352649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.352683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.352818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.352851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.353184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.353219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.353411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.353452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.353718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.353752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 
00:29:47.957 [2024-07-24 19:21:53.353950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.354013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.354329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.354365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.354699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.354734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.355099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.355163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.355520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.355556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.355780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.355815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.356135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.356199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.356563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.356599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.356832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.356896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.357282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.357346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 
00:29:47.957 [2024-07-24 19:21:53.357651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.357686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.358009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.358072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.358454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.358512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.358784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.358847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.359208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.359272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.957 [2024-07-24 19:21:53.359613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.957 [2024-07-24 19:21:53.359650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.957 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.359914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.359978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.360364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.360458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.360747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.360810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.361113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.361178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 
00:29:47.958 [2024-07-24 19:21:53.361445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.361517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.361763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.361820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.362091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.362155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.362484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.362541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.362738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.362801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.363041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.363081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.363343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.363406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.363758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.363822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.364055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.364119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.364395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.364438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 
00:29:47.958 [2024-07-24 19:21:53.364591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.364637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.364939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.365001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.365283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.365346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.365642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.365678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.365894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.365957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.366295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.366369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.366593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.366629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.366926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.366980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.367277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.367310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.367473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.367507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 
00:29:47.958 [2024-07-24 19:21:53.367651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.367686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.368099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.368196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.368525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.368560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.368707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.368770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.369039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.369104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.369393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.369436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.369643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.369702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.370011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.370075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.370310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.370374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.370750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.370807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 
00:29:47.958 [2024-07-24 19:21:53.371027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.371066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.371260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.371318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.371466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.371511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.371720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.371756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.371934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.371969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.372149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.372184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.372375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.372411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.372592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.372630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.372760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.372795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.372987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.373023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 
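Note that the tqpair value above briefly switches from 0x2089ea0 to 0x7f5e18000b90 and back, which suggests the connect attempts come from more than one queue-pair allocation (on Linux x86-64 a 0x7f... value typically lies in an mmap'd region, while 0x2089ea0 looks like an ordinary low heap address); either way, every attempt is refused identically. A bounded retry loop like the hypothetical one below, which discards the failed socket and re-dials per attempt before eventually giving up, is the plain-socket analogue of the repeated "qpair failed and we were unable to recover it" lines; it is a sketch under those assumptions, not SPDK's actual recovery logic.

/* Hypothetical retry loop, not SPDK code: re-create the socket for each
 * attempt (mirroring one fresh connect per qpair in the log) and give up
 * after a bounded number of refusals. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        int saved = errno;
        close(fd);          /* a failed socket is discarded, not reused */
        errno = saved;      /* preserve the connect() errno past close() */
        return -1;
    }
    return fd;
}

int main(void)
{
    for (int attempt = 1; attempt <= 5; attempt++) {
        int fd = try_connect("10.0.0.2", 4420);   /* addr/port from the log */
        if (fd >= 0) {
            close(fd);
            return 0;       /* connected; recovery succeeded */
        }
        fprintf(stderr, "attempt %d: errno = %d (%s)\n",
                attempt, errno, strerror(errno));
        usleep(100 * 1000); /* brief pause before re-dialing */
    }
    fprintf(stderr, "qpair-style give-up: unable to recover\n");
    return 1;
}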
00:29:47.958 [2024-07-24 19:21:53.373244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.373280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.373582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.373620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.373887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.373951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.374312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.374376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.374728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.374794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.375106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.375142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.375493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.375529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.375731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.375795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.376101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.376147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.376478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.376514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 
00:29:47.958 [2024-07-24 19:21:53.376719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.376765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.376999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.377045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.377383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.377439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.377691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.377726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.377880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.377943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.378284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.378349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.378688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.378725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.378926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.378961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.379154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.379199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.379425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.379497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 
00:29:47.958 [2024-07-24 19:21:53.379699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.379734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.379938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.379973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.380198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.380234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.380479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.380514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.380714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.380749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.380947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.380982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.381201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.381246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.958 qpair failed and we were unable to recover it. 00:29:47.958 [2024-07-24 19:21:53.381517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.958 [2024-07-24 19:21:53.381553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.381769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.381804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.381991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.382025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 
00:29:47.959 [2024-07-24 19:21:53.382204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.382249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.382503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.382539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.382722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.382794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.383100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.383164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.383492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.383527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.383677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.383712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.383898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.383949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.384259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.384305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.384560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.384597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.384827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.384863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 
00:29:47.959 [2024-07-24 19:21:53.385114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.385159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.385457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.385510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.385772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.385835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.386172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.386217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.386486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.386521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.386698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.386733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.386877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.386929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.387162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.387225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.387583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.387620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.387831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.387879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 
00:29:47.959 [2024-07-24 19:21:53.388158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.388222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.388527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.388564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.388863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.388899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.389099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.389134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.389322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.389367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.389701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.389738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.389977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.390020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.390281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.390316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.390490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.390526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.390704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.390738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 
00:29:47.959 [2024-07-24 19:21:53.390888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.390923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.391139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.391174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.391390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.391425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.391713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.391776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.392094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.392142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.392447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.392483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.392706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.392752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.392976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.393021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.393215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.393293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.393525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.393561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 
00:29:47.959 [2024-07-24 19:21:53.393749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.393784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.394015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.394079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.394283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.394327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.394553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.394589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.394776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.394820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.394997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.395042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.395312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.395381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.395642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.395678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.395943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.396005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.396268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.396331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 
00:29:47.959 [2024-07-24 19:21:53.396587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.396622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.396852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.396887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.397080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.397145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.397477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.397539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.397753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.397789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.398006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.398041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.398249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.398295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.398511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.398548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.398744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.398779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.959 qpair failed and we were unable to recover it. 00:29:47.959 [2024-07-24 19:21:53.398967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.959 [2024-07-24 19:21:53.399002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.960 qpair failed and we were unable to recover it. 
00:29:47.960 [2024-07-24 19:21:53.399252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.960 [2024-07-24 19:21:53.399315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.960 qpair failed and we were unable to recover it.
00:29:47.960 [... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error" / "qpair failed and we were unable to recover it." sequence repeats approximately 178 more times for tqpair=0x2089ea0 (addr=10.0.0.2, port=4420), timestamps 2024-07-24 19:21:53.399594 through 19:21:53.449576 ...]
00:29:47.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1777238 Killed "${NVMF_APP[@]}" "$@"
00:29:47.962 [2024-07-24 19:21:53.449826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.962 [2024-07-24 19:21:53.449862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.962 qpair failed and we were unable to recover it.
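For context: errno 111 is ECONNREFUSED. The host-side NVMe/TCP driver keeps calling connect() against 10.0.0.2:4420 while nothing is listening, because the test has just killed the target process on purpose (the "Killed" line above is target_disconnect.sh terminating "${NVMF_APP[@]}"). A minimal stand-alone probe, not part of the SPDK test suite, that reproduces the same failure mode from the shell:

  addr=10.0.0.2 port=4420
  for attempt in 1 2 3 4 5; do
      # /dev/tcp/<host>/<port> is a bash pseudo-device: the redirection
      # performs a real TCP connect(), which fails with ECONNREFUSED
      # (errno 111) for as long as no listener is bound to the port.
      if (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; then
          echo "attempt ${attempt}: connected"
          break
      fi
      echo "attempt ${attempt}: connect() refused, retrying"
      sleep 0.5
  done

Each iteration that fails corresponds to one "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." pair in the log above.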
00:29:47.962 19:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:47.962 19:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:47.962 19:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:47.962 19:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:47.962 19:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:47.962 [... interleaved with the xtrace output above, the connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it." sequence repeats 8 more times for tqpair=0x2089ea0 (addr=10.0.0.2, port=4420), timestamps 2024-07-24 19:21:53.450182 through 19:21:53.452022 ...]
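The trace above shows the test recovering: disconnect_init 10.0.0.2 restarts the target via nvmfappstart -m 0xF0 (-m is the CPU core mask; 0xF0 pins the app to cores 4-7). The exact RPC sequence lives in test/nvmf/host/target_disconnect.sh and nvmf/common.sh; the sketch below shows the usual shape of such a re-initialization using standard SPDK RPCs (the Malloc0 bdev name and the sizes are illustrative, not taken from this test):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Bring the transport, a backing bdev, and a subsystem back up on the
  # restarted target, then re-bind the listener the host is retrying against.
  $rpc nvmf_create_transport -t tcp
  $rpc bdev_malloc_create -b Malloc0 64 512                  # 64 MiB, 512 B blocks (illustrative)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is bound to 10.0.0.2:4420 again, the host's pending connect() retries succeed instead of logging errno 111.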
00:29:47.962 [2024-07-24 19:21:53.454818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.962 [2024-07-24 19:21:53.454852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.962 qpair failed and we were unable to recover it.
00:29:47.962 [2024-07-24 19:21:53.455060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.962 [2024-07-24 19:21:53.455094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.962 qpair failed and we were unable to recover it.
00:29:47.962 [2024-07-24 19:21:53.455349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.962 [2024-07-24 19:21:53.455384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.962 qpair failed and we were unable to recover it.
00:29:47.962 [2024-07-24 19:21:53.455629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.962 [2024-07-24 19:21:53.455664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.962 qpair failed and we were unable to recover it.
00:29:47.962 [2024-07-24 19:21:53.455885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.962 [2024-07-24 19:21:53.455920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.962 qpair failed and we were unable to recover it.
00:29:47.962 19:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1777789
00:29:47.962 19:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:47.962 [2024-07-24 19:21:53.456124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.962 [2024-07-24 19:21:53.456161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.962 19:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1777789
00:29:47.962 qpair failed and we were unable to recover it.
00:29:47.962 [2024-07-24 19:21:53.456328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.962 [2024-07-24 19:21:53.456363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.962 qpair failed and we were unable to recover it.
00:29:47.962 19:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1777789 ']'
00:29:47.962 [2024-07-24 19:21:53.456526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.963 [2024-07-24 19:21:53.456561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.963 qpair failed and we were unable to recover it.
00:29:47.963 19:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:47.963 19:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:47.963 [2024-07-24 19:21:53.456759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.963 [2024-07-24 19:21:53.456794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.963 qpair failed and we were unable to recover it.
00:29:47.963 19:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:47.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:47.963 [2024-07-24 19:21:53.457015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.963 19:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:47.963 [2024-07-24 19:21:53.457062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.963 qpair failed and we were unable to recover it.
00:29:47.963 19:21:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:47.963 [2024-07-24 19:21:53.457306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.963 [2024-07-24 19:21:53.457343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.963 qpair failed and we were unable to recover it.
00:29:47.963 [2024-07-24 19:21:53.457511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.963 [2024-07-24 19:21:53.457546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.963 qpair failed and we were unable to recover it.
00:29:47.963 [2024-07-24 19:21:53.457709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.963 [2024-07-24 19:21:53.457743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.963 qpair failed and we were unable to recover it.
00:29:47.963 [2024-07-24 19:21:53.457901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.963 [2024-07-24 19:21:53.457935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.963 qpair failed and we were unable to recover it.
00:29:47.963 [2024-07-24 19:21:53.458126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.963 [2024-07-24 19:21:53.458164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.963 qpair failed and we were unable to recover it.
00:29:47.965 [2024-07-24 19:21:53.499346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.499410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.499648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.499684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.499900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.499962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.500180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.500215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.500498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.500535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.500771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.500834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.501033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.501097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.501330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.501364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.501539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.501574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.501779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.501842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 
00:29:47.965 [2024-07-24 19:21:53.502063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.502127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.502373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.502408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.502593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.502628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.502854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.502918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.503179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.503242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.503505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.503541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.503729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.503792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.504098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.504161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.504393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.504469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.504700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.504736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 
00:29:47.965 [2024-07-24 19:21:53.504936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.504999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.505258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.505322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.505561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.505596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.505770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.505805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.506006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.506070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.506311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.506374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.506628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.506664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.506843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.506878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.507052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.507116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.507354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.507419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 
00:29:47.965 [2024-07-24 19:21:53.507629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.507664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.507866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.507901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.508136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.508200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.508479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.508539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.508742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.508816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.509052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.509088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.509281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.509344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.509633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.509669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.509877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.509942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.965 qpair failed and we were unable to recover it. 00:29:47.965 [2024-07-24 19:21:53.510163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-07-24 19:21:53.510199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 
00:29:47.966 [2024-07-24 19:21:53.510360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.510424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.510654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.510689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.510921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.510985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.511241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.511277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.511503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.511539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.511678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.511714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.511922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.511985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.512248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.512282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.512456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.512525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.512754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.512818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 
00:29:47.966 [2024-07-24 19:21:53.513123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.513186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.513451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.513487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.513703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.513766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.514035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.514098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.514355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.514419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.514664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.514699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.514854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.514927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.515132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.515196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.515474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.515510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.515686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.515721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 
00:29:47.966 [2024-07-24 19:21:53.515883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.515948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.516196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.516258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.516520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.516556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.516765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.516802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.517049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.517112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.517297] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:29:47.966 [2024-07-24 19:21:53.517381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.517406] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.966 [2024-07-24 19:21:53.517456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.517687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.517721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.518042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.518076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.518349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.518413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 
00:29:47.966 [2024-07-24 19:21:53.518655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.518690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.518926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.518990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.519241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.519276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.519449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.519521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.519753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.519820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.520027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.520099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.520345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.520380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.520556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.520592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.520762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.520826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.521089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.521152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 
00:29:47.966 [2024-07-24 19:21:53.521434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.521470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.521644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.521707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.521971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.522034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.522314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.522378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.522660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.522695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.522913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.522977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.523284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.523348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.523583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.523619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.523820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.523855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.524050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.524113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 
00:29:47.966 [2024-07-24 19:21:53.524346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.524410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.524696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.524751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.525029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.525065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.525235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.525299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.525539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.525576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.525771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.525834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.526069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.526104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.526297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.526361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.526620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.526655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.526887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.526950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 
00:29:47.966 [2024-07-24 19:21:53.527221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.527257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.527494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.527547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.527762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.527825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.528085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.528149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.966 [2024-07-24 19:21:53.528387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.966 [2024-07-24 19:21:53.528422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.966 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.528601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.528636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.528871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.528934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.529143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.529206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.529470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.529506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.529702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.529764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 
00:29:47.967 [2024-07-24 19:21:53.529998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.530062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.530337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.530400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.530675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.530710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.530925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.530989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.531251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.531316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.531510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.531545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.531732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.531775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.531974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.532038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.532327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.532391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.532672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.532712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 
00:29:47.967 [2024-07-24 19:21:53.532994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.533030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.533254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.533318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.533602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.533638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.533849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.533913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.534225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.534260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.534532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.534568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.534791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.534854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.535085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.535148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.535409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.535450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.535723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.535786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 
00:29:47.967 [2024-07-24 19:21:53.536024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.536087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.536319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.536382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.536661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.536697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.536925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.536988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.537296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.537360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.537628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.537664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.537840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.537875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.538089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.538153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.538413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.538486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 00:29:47.967 [2024-07-24 19:21:53.538713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.967 [2024-07-24 19:21:53.538791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.967 qpair failed and we were unable to recover it. 
00:29:47.968 [2024-07-24 19:21:53.567379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.968 [2024-07-24 19:21:53.567414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.968 qpair failed and we were unable to recover it.
00:29:47.968 [2024-07-24 19:21:53.567598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.968 [2024-07-24 19:21:53.567655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.968 qpair failed and we were unable to recover it.
00:29:47.968 EAL: No free 2048 kB hugepages reported on node 1
00:29:47.968 [2024-07-24 19:21:53.567859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.968 [2024-07-24 19:21:53.567894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.968 qpair failed and we were unable to recover it.
00:29:47.968 [2024-07-24 19:21:53.568083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.968 [2024-07-24 19:21:53.568147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.968 qpair failed and we were unable to recover it.
00:29:47.968 [2024-07-24 19:21:53.568401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.968 [2024-07-24 19:21:53.568442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.968 qpair failed and we were unable to recover it.
00:29:47.968 [2024-07-24 19:21:53.568629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.968 [2024-07-24 19:21:53.568665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.968 qpair failed and we were unable to recover it.
00:29:47.968 [2024-07-24 19:21:53.568868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.968 [2024-07-24 19:21:53.568902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.968 qpair failed and we were unable to recover it.
00:29:47.968 [2024-07-24 19:21:53.569033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.968 [2024-07-24 19:21:53.569066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.968 qpair failed and we were unable to recover it.
00:29:47.968 [2024-07-24 19:21:53.569270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.968 [2024-07-24 19:21:53.569305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.968 qpair failed and we were unable to recover it.
00:29:47.968 [2024-07-24 19:21:53.569510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.968 [2024-07-24 19:21:53.569545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420
00:29:47.968 qpair failed and we were unable to recover it.
00:29:47.970 [2024-07-24 19:21:53.595538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.595573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.595808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.595871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.596146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.596181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.596366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.596439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.596733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.596799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.597154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.597218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.597572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.597608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.597862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.597925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.598238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.598301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.598633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.598669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 
00:29:47.970 [2024-07-24 19:21:53.598846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.598887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.599137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.599200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.599449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.599519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.599749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.599812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.600110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.600144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.600365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.600440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.600784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.600863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.601170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.601232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.601588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.601626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.601844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.601907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 
00:29:47.970 [2024-07-24 19:21:53.602272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.602336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.602671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.602707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.603003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.603039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.603281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.603345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.603713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.603791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.604053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.604117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.604419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.604460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.604612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.604647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.604889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.604952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.605239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.605302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 
00:29:47.970 [2024-07-24 19:21:53.605625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.605660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.605932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.605996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.606333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.606397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.606771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.606843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.607230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.607294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.607562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.607599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.607873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.607937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.608249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.608312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.608611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.608645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.608904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.608967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 
00:29:47.970 [2024-07-24 19:21:53.609327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.609390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.609742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.609805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.610089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.610125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.610389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.610466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.610757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.610820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.611143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.611208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.611517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.611553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.611818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.611881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.612140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.612204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.612559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.612595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 
00:29:47.970 [2024-07-24 19:21:53.612855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.612924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.613243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.613307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.613603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.613638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.613808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.613882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.614210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.614281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.614555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.970 [2024-07-24 19:21:53.614591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.970 qpair failed and we were unable to recover it. 00:29:47.970 [2024-07-24 19:21:53.614876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.614939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.615275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.615338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.615649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.615685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.615997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.616060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 
00:29:47.971 [2024-07-24 19:21:53.616407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.616510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.616751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.616820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.617157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.617227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.617518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.617554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.617759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.617824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.618072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.618134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.618393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.618434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.618743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.618807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.619108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.619171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.619473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.619534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 
00:29:47.971 [2024-07-24 19:21:53.619800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.619872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.620195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.620259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.620586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.620621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.620811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.620874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.621281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.621345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.621657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.621693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.622031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.622094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.622449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.622511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.622765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.622827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.623101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.623165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 
00:29:47.971 [2024-07-24 19:21:53.623511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.623546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.623767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.623830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.624155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.624210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.624462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.624525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.624770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.624833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.625110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.625174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.625505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.625541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.625703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.625752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.626070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.626133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.626347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.626392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 
00:29:47.971 [2024-07-24 19:21:53.626658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.626694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.627039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.627092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.627367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.627414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.627674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.627710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.628013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.628048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.628365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.628410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.628724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.628760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.629006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.629052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.629248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.629283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 00:29:47.971 [2024-07-24 19:21:53.629487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.971 [2024-07-24 19:21:53.629523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:47.971 qpair failed and we were unable to recover it. 
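For reference, errno = 111 on Linux is ECONNREFUSED: the host at 10.0.0.2 is reachable, but nothing is accepting TCP connections on port 4420 (the standard NVMe/TCP port) while these attempts are made. A minimal stand-alone sketch that reproduces the same errno, assuming a Linux host where that address answers but has no listener on the port — plain POSIX sockets, not SPDK's sock layer:

/* Illustrative repro (not SPDK code): connecting to a TCP port with
 * no listener fails with errno = 111 (ECONNREFUSED) on Linux, which
 * is exactly what posix_sock_create reports above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* standard NVMe/TCP port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no NVMe-oF target listening this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

With the SPDK NVMe/TCP initiator, the same refusal surfaces as the posix_sock_create / nvme_tcp_qpair_connect_sock pair seen above, once per connection attempt.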
00:29:47.971 [2024-07-24 19:21:53.630004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:47.971 [2024-07-24 19:21:53.630065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:47.971 qpair failed and we were unable to recover it.
[... the same triplet repeats 6 more times for tqpair=0x7f5e08000b90, timestamps 19:21:53.630358 through 19:21:53.631770 ...]
00:29:47.971 [2024-07-24 19:21:53.631918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
[... the same triplet repeats 71 more times for tqpair=0x7f5e08000b90, timestamps 19:21:53.632012 through 19:21:53.651984; the console clock advances from 00:29:47.971 to 00:29:48.247 over this run ...]
00:29:48.247 [2024-07-24 19:21:53.652183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.652249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.652541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.652602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.652869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.652907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.653134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.653172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.653407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.653470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.653777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.653844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.654152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.654222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.654472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.654509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.654739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.654797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.655056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.655103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 
00:29:48.247 [2024-07-24 19:21:53.655297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.655346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.655542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.655578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.655773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.655828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.656099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.656165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.656467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.656543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.656838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.656875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.657182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.657231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.657489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.657537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.657802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.657850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.658073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.658114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 
00:29:48.247 [2024-07-24 19:21:53.658346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.658384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.658600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.658638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.658874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.658941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.247 [2024-07-24 19:21:53.659209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.247 [2024-07-24 19:21:53.659247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.247 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.659418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.659468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.659690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.659743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.660067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.660119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.660345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.660383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.660588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.660633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.660960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.661026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 
00:29:48.248 [2024-07-24 19:21:53.661316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.661365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.661595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.661631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.661830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.661878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.662093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.662140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.662390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.662452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.662766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.662820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.663085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.663152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.663445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.663513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.663691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.663726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.664016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.664070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 
00:29:48.248 [2024-07-24 19:21:53.664246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.664294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.664508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.664558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.664839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.664906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.665233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.665306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.665580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.665659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.665944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.665993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.666163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.666220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.666504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.666541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.666753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.666808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.667028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.667094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 
00:29:48.248 [2024-07-24 19:21:53.667335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.667394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.667614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.667650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.667807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.667848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.668052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.668091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.668302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.668365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.668696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.668737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.668968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.669005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.669201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.669276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.669580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.669623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.669860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.669913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 
00:29:48.248 [2024-07-24 19:21:53.670099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.248 [2024-07-24 19:21:53.670147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.248 qpair failed and we were unable to recover it. 00:29:48.248 [2024-07-24 19:21:53.670402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.670476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.670710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.670758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.671021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.671064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.671357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.671421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.671685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.671743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.672015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.672062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.672261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.672297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.672471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.672529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.672733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.672801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 
00:29:48.249 [2024-07-24 19:21:53.673117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.673182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.673478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.673515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.673719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.673807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.674066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.674114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.674284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.674341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.674549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.674585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.674777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.674826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.675098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.675165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.675481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.675520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.675768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.675803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 
00:29:48.249 [2024-07-24 19:21:53.676088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.676155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.676361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.676408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.676643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.676698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.676939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.676983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.677161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.677198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.677384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.677438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.677654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.677688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.677946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.677982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.678181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.678217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.678444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.678508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 
00:29:48.249 [2024-07-24 19:21:53.678713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.678750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.679014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.679051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.679262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.679309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.679509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.679545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.679733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.679811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.680060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.680106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.680290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.680344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.249 [2024-07-24 19:21:53.680565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.249 [2024-07-24 19:21:53.680601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.249 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.680803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.680851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.681098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.681134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 
00:29:48.250 [2024-07-24 19:21:53.681334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.681381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.681601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.681637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.681807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.681879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.682186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.682223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.682534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.682570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.682761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.682812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.683203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.683250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.683550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.683586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.683746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.683782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.684088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.684154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 
00:29:48.250 [2024-07-24 19:21:53.684488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.684524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.684715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.684751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.684929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.684965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.685208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.685260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.685517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.685554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.685728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.685764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.686057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.686126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.686421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.686531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.686707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.686766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.687120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.687167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 
00:29:48.250 [2024-07-24 19:21:53.687394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.687473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.687698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.687746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.688050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.688130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.688413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.688461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.688642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.688699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.688953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.250 [2024-07-24 19:21:53.689000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.250 qpair failed and we were unable to recover it. 00:29:48.250 [2024-07-24 19:21:53.689231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.251 [2024-07-24 19:21:53.689278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.251 qpair failed and we were unable to recover it. 00:29:48.251 [2024-07-24 19:21:53.689602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.251 [2024-07-24 19:21:53.689645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.251 qpair failed and we were unable to recover it. 00:29:48.251 [2024-07-24 19:21:53.689985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.251 [2024-07-24 19:21:53.690024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.251 qpair failed and we were unable to recover it. 00:29:48.251 [2024-07-24 19:21:53.690208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.251 [2024-07-24 19:21:53.690274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.251 qpair failed and we were unable to recover it. 
00:29:48.251 [2024-07-24 19:21:53.690579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.251 [2024-07-24 19:21:53.690621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.251 qpair failed and we were unable to recover it.
00:29:48.251 [... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every subsequent reconnect attempt from 19:21:53.690855 through 19:21:53.756777 ...]
00:29:48.257 [2024-07-24 19:21:53.757167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.257 [2024-07-24 19:21:53.757222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.257 qpair failed and we were unable to recover it.
00:29:48.257 [2024-07-24 19:21:53.757492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.757557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.757870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.757935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.758320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.758385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.758750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.758815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.759103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.759167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.759487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.759554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.759855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.759919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.760160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.760197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.760398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.760480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.760777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.760842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 
00:29:48.257 [2024-07-24 19:21:53.761140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.761206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.761554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.761628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.761986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.762051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.762341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.762406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.762691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.762724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.763025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.763060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.763340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.763405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.763694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.763759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.764023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.764088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.764355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.764390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 
00:29:48.257 [2024-07-24 19:21:53.764689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.764755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.765015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.765081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.765403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.765485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.765796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.765831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.766095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.766159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.766448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.766510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.766688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.766739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.767049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.767085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.767325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.767389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.767747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.767811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 
00:29:48.257 [2024-07-24 19:21:53.768104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.257 [2024-07-24 19:21:53.768169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.257 qpair failed and we were unable to recover it. 00:29:48.257 [2024-07-24 19:21:53.768483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.768519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.768852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.768917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.769184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.769248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.769517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.769583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.769888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.769929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.770238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.770303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.770575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.770610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.770818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.770882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.771182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.771218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 
00:29:48.258 [2024-07-24 19:21:53.771499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.771564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.771810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.771874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.772145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.772209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.772528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.772564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.772904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.772968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.773184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.773249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.773571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.773637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.773937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.773972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.774203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.774269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.774599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.774633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 
00:29:48.258 [2024-07-24 19:21:53.774887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.774952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.775227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.775262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.775557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.775623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.775896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.775960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.776252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.776316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.776607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.776642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.776857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.776922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.777193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.777258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.777577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.777643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.777928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.777964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 
00:29:48.258 [2024-07-24 19:21:53.778208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.778273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.778582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.778616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.778812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.778877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.779220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.779287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.779560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.779625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.779891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.779966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.258 [2024-07-24 19:21:53.780296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.258 [2024-07-24 19:21:53.780362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.258 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.780634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.780670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.780933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.780997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.781275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.781339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 
00:29:48.259 [2024-07-24 19:21:53.781620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.781656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.781819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.781855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.782072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.782136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.782451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.782523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.782707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.782764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.783074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.783115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.783466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.783532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.783839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.783903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.784163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.784228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.784514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.784550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 
00:29:48.259 [2024-07-24 19:21:53.784804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.784868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.785177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.785241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.785551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.785616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.785919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.785955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.786259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.786325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.786646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.786679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.786903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.786968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.787253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.787289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.787531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.787597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.787913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.787978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 
00:29:48.259 [2024-07-24 19:21:53.788208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.788272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.788595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.788661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.788923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.788988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.789234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.789298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.789587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.789653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.789897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.789933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.790091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.790156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.790454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.790512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.790770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.790835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.791114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.791149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 
00:29:48.259 [2024-07-24 19:21:53.791372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.791466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.791775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.791840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.792122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.259 [2024-07-24 19:21:53.792187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.259 qpair failed and we were unable to recover it. 00:29:48.259 [2024-07-24 19:21:53.792487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.792523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.792839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.792903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.793221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.793286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.793576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.793642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.793913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.793948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.794108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.794174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.794516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.794550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 
00:29:48.260 [2024-07-24 19:21:53.794788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.794852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.795088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.795123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.795337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.795402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.795759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.795825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.796097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.796162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.796462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.796503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.796773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.796837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.797093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.797158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.797416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.797496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.797732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.797768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 
00:29:48.260 [2024-07-24 19:21:53.797992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.798058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.798366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.798445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.798704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.798771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.799027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.799063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.799253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.799317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.799577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.799644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.799908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.799973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.800180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.800214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.800456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.800522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.800858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.800923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 
00:29:48.260 [2024-07-24 19:21:53.801235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.801300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.801606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.260 [2024-07-24 19:21:53.801642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.260 qpair failed and we were unable to recover it. 00:29:48.260 [2024-07-24 19:21:53.801959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.802024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.802317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.802381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.802719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.802777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.803052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.803087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.803350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.803416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.803724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.803789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.804080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.804145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.804416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.804457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 
00:29:48.261 [2024-07-24 19:21:53.804657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.804722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.805019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.805083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.805396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.805477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.805766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.805801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.806009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.806073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.806335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.806400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.806700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.806763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.807048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.807083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.807344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.807409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.807744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.807809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 
00:29:48.261 [2024-07-24 19:21:53.808118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.808182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.808479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.808516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.808811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.808875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.809121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.809186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.809455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.809521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.809797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.809842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.810057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.810123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.810465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.810529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.810721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.810800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.811065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.811100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 
00:29:48.261 [2024-07-24 19:21:53.811455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.811520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.811766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.811831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.812110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.812175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.812502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.812556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.812884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.812949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.813198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.813263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.813503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.813568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.813875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.813911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.814169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.814234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 00:29:48.261 [2024-07-24 19:21:53.814467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.261 [2024-07-24 19:21:53.814520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.261 qpair failed and we were unable to recover it. 
00:29:48.262 [2024-07-24 19:21:53.814776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.814840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.815108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.815144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.815361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.815425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.815792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.815857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.816175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.816238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.816551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.816588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.816923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.816989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.817235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.817299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.817577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.817642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.817953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.817988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 
00:29:48.262 [2024-07-24 19:21:53.818305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.818369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.818669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.818703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.819005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.819070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.819341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.819376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.819622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.819687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.819961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.820025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.820375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.820411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.820624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.820660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.820889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.820954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.821253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.821317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 
00:29:48.262 [2024-07-24 19:21:53.821632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.821698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.822029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.822083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.822394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.822488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.822702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.822762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.823066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.823130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.823426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.823482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.823792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.823858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.824181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.824245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.824549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.824615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.824905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.824940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 
00:29:48.262 [2024-07-24 19:21:53.825230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.825294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.825581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.825646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.825944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.826009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.826242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.826277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.826455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.826511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.826672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.826705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.826972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.262 [2024-07-24 19:21:53.827036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.262 qpair failed and we were unable to recover it. 00:29:48.262 [2024-07-24 19:21:53.827345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.827380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.827681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.827746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.828013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.828077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 
00:29:48.263 [2024-07-24 19:21:53.828399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.828479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.828778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.828813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.829043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.829106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.829406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.829486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.829800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.829864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.830159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.830194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.830459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.830514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.830715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.830788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.831068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.831134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.831416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.831466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 
00:29:48.263 [2024-07-24 19:21:53.831780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.831844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.832145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.832209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.832490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.832555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.832848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.832883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.833085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.833149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.833460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.833523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.833800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.833865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.834130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.834166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.834334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.834369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.834548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.834582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 
00:29:48.263 [2024-07-24 19:21:53.834779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.834813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.835042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.835075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.835290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.835323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.835534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.835568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.835721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.835755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.835916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.835954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.836143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.836178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.836410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.836487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.836756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.263 [2024-07-24 19:21:53.836811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.836837] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:48.263 [2024-07-24 19:21:53.836874] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.263 [2024-07-24 19:21:53.836874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 [2024-07-24 19:21:53.836910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.836937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:48.263 [2024-07-24 19:21:53.837100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:48.263 [2024-07-24 19:21:53.837187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.837222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.837177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:48.263 [2024-07-24 19:21:53.837399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.837442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.263 [2024-07-24 19:21:53.837402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:48.263 [2024-07-24 19:21:53.837408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:48.263 [2024-07-24 19:21:53.837643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.263 [2024-07-24 19:21:53.837677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.263 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.837810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.837844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.838024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.838058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.838221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.838255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.838418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.838469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 
00:29:48.264 [2024-07-24 19:21:53.838690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.838725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.838896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.838930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.839129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.839162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.839391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.839425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.839639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.839672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.839848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.839881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.840090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.840123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.840358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.840391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.840585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.840620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.840793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.840827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 
00:29:48.264 [2024-07-24 19:21:53.840997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.841030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.841202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.841235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.841446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.841482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.841697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.841731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.841932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.841965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.842169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.842203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.842377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.842410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.842614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.842648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.842892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.842926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.843114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.843148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 
00:29:48.264 [2024-07-24 19:21:53.843323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.843356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.843519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.843553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.843736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.843770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.843909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.843943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.844116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.844150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.844324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.844358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.844548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.844588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.844759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.844792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.844972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.845005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.845175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.845209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 
00:29:48.264 [2024-07-24 19:21:53.845377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.845410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.845568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.845602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.845796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.845830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.264 qpair failed and we were unable to recover it. 00:29:48.264 [2024-07-24 19:21:53.846030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.264 [2024-07-24 19:21:53.846064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.846253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.846287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.846476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.846511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.846687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.846722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.846928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.846962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.847188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.847222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.847418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.847458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 
00:29:48.265 [2024-07-24 19:21:53.847654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.847688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.847893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.847926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.848104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.848138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.848299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.848334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.848494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.848529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.848680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.848713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.848886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.848920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.849080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.849114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.849312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.849346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.849514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.849549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 
00:29:48.265 [2024-07-24 19:21:53.849717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.849751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.849952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.849985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.850208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.850241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.850405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.850458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.850660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.850694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.850900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.850934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.851133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.851166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.851315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.851349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.851542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.851576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 00:29:48.265 [2024-07-24 19:21:53.851764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.265 [2024-07-24 19:21:53.851797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.265 qpair failed and we were unable to recover it. 
00:29:48.271 [2024-07-24 19:21:53.899779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.899814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.900019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.900054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.900258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.900295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.900481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.900516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.900726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.900768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.900943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.900982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.901201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.901241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.901505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.901541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.901698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.901732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.901940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.901975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 
00:29:48.271 [2024-07-24 19:21:53.902237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.902273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.902524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.902559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.902729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.902764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.902935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.902969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.903178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.903212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.903395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.903438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.903706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.903739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.903979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.904015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.904275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.904310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.904451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.904486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 
00:29:48.271 [2024-07-24 19:21:53.904667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.904703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.904944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.904979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.905135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.905170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.905368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.905409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.905637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.905671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.905881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.905915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.906174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.906209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.906364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.906398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.271 [2024-07-24 19:21:53.906625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.271 [2024-07-24 19:21:53.906661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.271 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.906784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.906818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 
00:29:48.272 [2024-07-24 19:21:53.906996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.907031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.907219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.907254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.907454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.907489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.907747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.907789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.908007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.908042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.908253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.908287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.908489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.908525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.908733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.908768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.908943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.908984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.909166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.909200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 
00:29:48.272 [2024-07-24 19:21:53.909403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.909450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.909701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.909734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.909964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.909999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.910199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.910233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.910504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.910539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.910725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.910760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.910961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.910996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.911250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.911285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.911445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.911489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.911703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.911737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 
00:29:48.272 [2024-07-24 19:21:53.912001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.912036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.912250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.912285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.912461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.912496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.912675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.912710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.912875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.912909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.913120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.913155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.913373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.913413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.913631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.913666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.913840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.913875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.914076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.914111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 
00:29:48.272 [2024-07-24 19:21:53.914345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.914380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.914557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.914595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.914843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.914877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.915052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.915088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.915217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.915251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.915451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.915486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.915686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.915727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.915911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.915946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.916149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.916183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.916377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.916411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 
00:29:48.272 [2024-07-24 19:21:53.916629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.272 [2024-07-24 19:21:53.916665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.272 qpair failed and we were unable to recover it. 00:29:48.272 [2024-07-24 19:21:53.916917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.916957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.917164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.917198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.917338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.917386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.917602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.917637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.917891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.917926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.918131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.918165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.918411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.918457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.918668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.918702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.918947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.918984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 
00:29:48.273 [2024-07-24 19:21:53.919241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.919276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.919479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.919516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.919712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.919747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.919934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.919969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.920146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.920184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.920329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.920363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.920567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.920603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.920843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.920878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.921117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.921152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.921319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.921364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 
00:29:48.273 [2024-07-24 19:21:53.921556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.921592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.921814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.921851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.922008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.922042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.922227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.922262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.922465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.922499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.922702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.922737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.922947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.922985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.923211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.923246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.923452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.923488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.923616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.923650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 
00:29:48.273 [2024-07-24 19:21:53.923861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.923899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.924104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.924139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.924352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.924387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.924528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.924564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.924740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.924781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.925008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.925042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.925253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.925288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.925494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.925532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.925743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.925778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 00:29:48.273 [2024-07-24 19:21:53.926039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.926073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.273 qpair failed and we were unable to recover it. 
00:29:48.273 [2024-07-24 19:21:53.926272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.273 [2024-07-24 19:21:53.926306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.926511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.926547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.926714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.926750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.926973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.927012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.927213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.927248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.927400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.927460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.927664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.927698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.927905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.927941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.928142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.928177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.928424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.928471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 
00:29:48.551 [2024-07-24 19:21:53.928746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.928789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.929068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.929102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.929312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.929347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.929559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.929594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.929833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.929869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.930034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.930073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.930286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.930320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.930528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.930564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.930741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.930777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 00:29:48.551 [2024-07-24 19:21:53.931001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.551 [2024-07-24 19:21:53.931035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.551 qpair failed and we were unable to recover it. 
00:29:48.551 [2024-07-24 19:21:53.931243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.551 [2024-07-24 19:21:53.931277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.551 qpair failed and we were unable to recover it.
00:29:48.551 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 19:21:53.931493 through 19:21:53.980257; repeated entries elided ...]
00:29:48.554 [2024-07-24 19:21:53.980397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.554 [2024-07-24 19:21:53.980436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.554 qpair failed and we were unable to recover it.
00:29:48.554 [2024-07-24 19:21:53.980610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.980643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.980843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.980876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.981109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.981142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.981354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.981387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.981649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.981684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.981874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.981906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.982110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.982143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.982344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.982378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.982581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.982615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.982823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.982856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 
00:29:48.554 [2024-07-24 19:21:53.983059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.983093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.983351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.983385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.983591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.983625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.983823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.983861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.984071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.984104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.984315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.984348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.984549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.984583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.554 [2024-07-24 19:21:53.984811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.554 [2024-07-24 19:21:53.984845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.554 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.985050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.985084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.985285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.985318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 
00:29:48.555 [2024-07-24 19:21:53.985526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.985560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.985782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.985816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.986013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.986046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.986260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.986293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.986491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.986525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.986786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.986819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.987015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.987048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.987230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.987263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.987474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.987508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.987675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.987709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 
00:29:48.555 [2024-07-24 19:21:53.987904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.987937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.988181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.988215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.988414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.988456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.988625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.988659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.988832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.988865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.989063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.989097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.989346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.989379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.989651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.989685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.989900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.989934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.990139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.990172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 
00:29:48.555 [2024-07-24 19:21:53.990421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.990463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.990612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.990646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.990841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.990875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.991040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.991074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.991258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.991292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.991495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.991529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.991727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.991761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.991968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.992002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.992207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.992240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.992468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.992502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 
00:29:48.555 [2024-07-24 19:21:53.992710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.992744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.992947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.992980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.993187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.993221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.993384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.993434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.993652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.993685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.993902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.993935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.994144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.994177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.994373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.994407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.994585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.994618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.994792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.994825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 
00:29:48.555 [2024-07-24 19:21:53.995033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.995066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.995314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.995347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.995613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.995647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.995803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.995837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.995992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.996034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.996242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.996276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.996465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.996499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.996718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.996751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.996990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.997024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.997231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.997264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 
00:29:48.555 [2024-07-24 19:21:53.997473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.997507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.997678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.997711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.997910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.997944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.998117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.998151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.998323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.998357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.998572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.998607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.998788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.998821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.999022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.999056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.999280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.999313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.999472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.999506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 
00:29:48.555 [2024-07-24 19:21:53.999688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.999722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:53.999893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:53.999926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:54.000138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:54.000171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:54.000378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:54.000411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:54.000566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:54.000599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:54.000802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:54.000835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.555 [2024-07-24 19:21:54.001036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.555 [2024-07-24 19:21:54.001070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.555 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.001269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.001302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.001501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.001535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.001732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.001766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 
00:29:48.556 [2024-07-24 19:21:54.002017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.002051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.002212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.002245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.002417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.002459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.002614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.002652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.002857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.002890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.003126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.003159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.003422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.003463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.003717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.003751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.003923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.003957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.004157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.004190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 
00:29:48.556 [2024-07-24 19:21:54.004388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.004421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.004611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.004647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.004852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.004885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.005055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.005088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.005234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.005267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.005442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.005477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.005675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.005709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.005901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.005935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.006133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.006166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.006374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.006408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 
00:29:48.556 [2024-07-24 19:21:54.006635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.006668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.006811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.006844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.007043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.007077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.007275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.007308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.007482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.007517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.007717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.007750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.007958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.007992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.008209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.008243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.008382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.008414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.008619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.008653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 
00:29:48.556 [2024-07-24 19:21:54.008950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.009004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.009285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.009319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.009506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.009547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.009747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.009780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.009995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.010029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.010254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.010287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.010500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.010536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.010784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.010817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.011022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.011055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 00:29:48.556 [2024-07-24 19:21:54.011342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.011376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it. 
00:29:48.556 [2024-07-24 19:21:54.011688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.556 [2024-07-24 19:21:54.011723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420 00:29:48.556 qpair failed and we were unable to recover it.
[... the same three-message sequence — posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x2089ea0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats verbatim through 2024-07-24 19:21:54.040203 ...]
00:29:48.558 [2024-07-24 19:21:54.040261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2086b00 (9): Bad file descriptor
00:29:48.558 [2024-07-24 19:21:54.040574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.558 [2024-07-24 19:21:54.040629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.558 qpair failed and we were unable to recover it.
[... the same sequence repeats verbatim for tqpair=0x7f5e10000b90 through 2024-07-24 19:21:54.062330 ...]
00:29:48.559 [2024-07-24 19:21:54.062500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.559 [2024-07-24 19:21:54.062535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.559 qpair failed and we were unable to recover it. 00:29:48.559 [2024-07-24 19:21:54.062701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.559 [2024-07-24 19:21:54.062735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.559 qpair failed and we were unable to recover it. 00:29:48.559 [2024-07-24 19:21:54.062947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.559 [2024-07-24 19:21:54.062981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.559 qpair failed and we were unable to recover it. 00:29:48.559 [2024-07-24 19:21:54.063189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.559 [2024-07-24 19:21:54.063222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.559 qpair failed and we were unable to recover it. 00:29:48.559 [2024-07-24 19:21:54.063424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.559 [2024-07-24 19:21:54.063463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.559 qpair failed and we were unable to recover it. 00:29:48.559 [2024-07-24 19:21:54.063635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.559 [2024-07-24 19:21:54.063668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.559 qpair failed and we were unable to recover it. 00:29:48.559 [2024-07-24 19:21:54.063802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.559 [2024-07-24 19:21:54.063836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.559 qpair failed and we were unable to recover it. 00:29:48.559 [2024-07-24 19:21:54.064021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.559 [2024-07-24 19:21:54.064054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.559 qpair failed and we were unable to recover it. 00:29:48.559 [2024-07-24 19:21:54.064260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.559 [2024-07-24 19:21:54.064294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.559 qpair failed and we were unable to recover it. 00:29:48.559 [2024-07-24 19:21:54.064484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.559 [2024-07-24 19:21:54.064523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.559 qpair failed and we were unable to recover it. 
00:29:48.559 [2024-07-24 19:21:54.064722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.559 [2024-07-24 19:21:54.064756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.559 qpair failed and we were unable to recover it. 00:29:48.559 [2024-07-24 19:21:54.064944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.064988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.065208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.065242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.065423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.065462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.065660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.065694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.065940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.065974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.066100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.066134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.066301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.066335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.066532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.066566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.066775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.066808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 
00:29:48.560 [2024-07-24 19:21:54.067008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.067042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.067209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.067243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.067415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.067456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.067597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.067640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.067792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.067825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.068041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.068074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.068259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.068302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.068501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.068535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.068708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.068742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.068878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.068911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 
00:29:48.560 [2024-07-24 19:21:54.069104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.069137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.069347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.069381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.069514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.069548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.069708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.069742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.069948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.069982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.070177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.070211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.070363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.070397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.070622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.070656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.070827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.070860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.071073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.071106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 
00:29:48.560 [2024-07-24 19:21:54.071271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.071305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.071491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.071526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.071706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.071739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.071942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.071976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.072166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.072200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.072378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.072412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.072621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.072655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.072856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.072891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.073075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.073108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.073271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.073310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 
00:29:48.560 [2024-07-24 19:21:54.073505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.073539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.073702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.073747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.073936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.073970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.074153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.074187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.074346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.074380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.074560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.074594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.074809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.074843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.075050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.075084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.075297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.075331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.075541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.075576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 
00:29:48.560 [2024-07-24 19:21:54.075758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.075792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.075959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.075992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.076203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.076237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.076436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.076470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.076615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.076649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.076846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.076879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.077080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.077113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.077323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.077357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.077560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.077594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.077769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.077803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 
00:29:48.560 [2024-07-24 19:21:54.077985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.078019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.078187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.078221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.078373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.078406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.078612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.078646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.078823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.078856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.079060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.079095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.079232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.079266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.079465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.079499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.079634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.079667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 00:29:48.560 [2024-07-24 19:21:54.079898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.560 [2024-07-24 19:21:54.079932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.560 qpair failed and we were unable to recover it. 
00:29:48.560 [2024-07-24 19:21:54.080199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.080233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.080402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.080443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.080653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.080687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.080869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.080902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.081114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.081147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.081359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.081393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.081577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.081611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.081883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.081916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.082115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.082148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.082345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.082383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 
00:29:48.561 [2024-07-24 19:21:54.082566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.082599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.082766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.082799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.083046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.083079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.083255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.083289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.083493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.083528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.083668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.083701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.083870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.083903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.084141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.084174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.084409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.084451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.084637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.084671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 
00:29:48.561 [2024-07-24 19:21:54.084864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.084898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.085115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.085148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.085374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.085407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.085595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.085630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.085805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.085838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.086011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.086045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.086213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.086247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.086444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.086478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.086629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.086663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.086896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.086929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 
00:29:48.561 [2024-07-24 19:21:54.087168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.087208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.087409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.087450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.087641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.087675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.087933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.087967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.088137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.088175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.088435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.088469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.088646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.088680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.088941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.088975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.089155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.089188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.089403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.089446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 
00:29:48.561 [2024-07-24 19:21:54.089598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.089631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.089806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.089840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.090053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.090086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.090289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.090322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.090501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.090535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.090681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.090715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.090897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.090931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.091108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.091142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.091309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.091342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 00:29:48.561 [2024-07-24 19:21:54.091516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.561 [2024-07-24 19:21:54.091550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420 00:29:48.561 qpair failed and we were unable to recover it. 
00:29:48.561 [2024-07-24 19:21:54.091718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.561 [2024-07-24 19:21:54.091754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420
00:29:48.561 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, then "sock connection error of tqpair=0x7f5e10000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats back-to-back from 19:21:54.091954 through 19:21:54.126385 ...]
00:29:48.564 [2024-07-24 19:21:54.126569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.564 [2024-07-24 19:21:54.126620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.564 qpair failed and we were unable to recover it.
[... identical failures continue for tqpair=0x7f5e08000b90 through 19:21:54.132663 ...]
00:29:48.564 [2024-07-24 19:21:54.132861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.564 [2024-07-24 19:21:54.132916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.564 qpair failed and we were unable to recover it.
[... identical failures continue for tqpair=0x7f5e18000b90 through 19:21:54.135240, then again for tqpair=0x7f5e08000b90 from 19:21:54.135410 onward, every attempt ending with "qpair failed and we were unable to recover it." ...]
00:29:48.565 [2024-07-24 19:21:54.137528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.565 [2024-07-24 19:21:54.137563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.565 qpair failed and we were unable to recover it.
00:29:48.565 [2024-07-24 19:21:54.137703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.137737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.137903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.137943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.138144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.138177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.138330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.138365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.138528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.138563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.138741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.138776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.139003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.139037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.139241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.139276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.139418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.139473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.139616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.139657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 
00:29:48.565 [2024-07-24 19:21:54.139843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.139878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.140045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.140089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.140311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.140345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.140526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.140562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.140709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.140743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.140930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.140973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.141197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.141236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.141403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.141467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.141617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.141653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.141864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.141898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 
00:29:48.565 [2024-07-24 19:21:54.142054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.142090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.142319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.142361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.142529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.142565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.142741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.142777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.142995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.143030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.143167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.143200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.143379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.143416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.143576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.143610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.143794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.143832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.143977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.144011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 
00:29:48.565 [2024-07-24 19:21:54.144224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.144259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.144462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.144500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.144665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.144710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.144896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.144930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.145211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.145247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.145467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.145506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.145633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.145671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.145845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.145880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.146064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.146105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.146274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.146308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 
00:29:48.565 [2024-07-24 19:21:54.146498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.146538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.146710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.146743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.146907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.146940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.147118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.147161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.147385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.147419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.147572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.147607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.147816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.147850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.148067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.148101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.148302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.148336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.148534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.148570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 
00:29:48.565 [2024-07-24 19:21:54.148731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.148765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.148949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.148984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.149206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.149244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.149417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.149480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.149621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.149655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.149840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.149874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.150064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.150104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.150306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.150340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.150528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.150564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.150705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.150747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 
00:29:48.565 [2024-07-24 19:21:54.150961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.150996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.151137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.151180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.151369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.151403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.151560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.151601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.151808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.151842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.152018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.152052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.565 [2024-07-24 19:21:54.152221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.565 [2024-07-24 19:21:54.152256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.565 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.152441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.152481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.152647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.152693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.152868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.152903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 
00:29:48.566 [2024-07-24 19:21:54.153082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.153120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.153270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.153303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.153463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.153500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.153710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.153770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.153991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.154026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.154171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.154205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.154351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.154384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.154543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.154577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.154744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.154777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.154996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.155030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 
00:29:48.566 [2024-07-24 19:21:54.155216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.155248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.155389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.155423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.155602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.155636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.155827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.155860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.156060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.156093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.156248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.156281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.156452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.156489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.156629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.156662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.156875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.156908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.157100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.157136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 
00:29:48.566 [2024-07-24 19:21:54.157361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.157394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.157561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.157594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.157758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.157791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.157977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.158010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.158277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.158311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.158510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.158547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.158708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.158747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.158925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.158958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.159120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.159154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.159325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.159361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 
00:29:48.566 [2024-07-24 19:21:54.159561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.159595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.159816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.159849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.160049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.160082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.160257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.160290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.160474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.160509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.160681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.160714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.160893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.160927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.161127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.161161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.161402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.161441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.161608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.161641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 
00:29:48.566 [2024-07-24 19:21:54.161835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.161871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.162089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.162123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.162304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.162338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.162543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.162576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.162728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.162761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.162932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.162966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.163167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.163200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.163392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.163424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.163598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.163631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.163791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.163824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 
00:29:48.566 [2024-07-24 19:21:54.164022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.164056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.164232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.164265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.164486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.164531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.164728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.164762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.164935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.164968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.165142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.165175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.165337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.165370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.165563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.566 [2024-07-24 19:21:54.165597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.566 qpair failed and we were unable to recover it. 00:29:48.566 [2024-07-24 19:21:54.165805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.165839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.166032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.166066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 
00:29:48.567 [2024-07-24 19:21:54.166219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.166252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.166399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.166440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.166580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.166614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.166831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.166863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.167108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.167140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.167317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.167350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.167556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.167596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.167789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.167823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.168018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.168050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.168219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.168265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 
00:29:48.567 [2024-07-24 19:21:54.172956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.567 [2024-07-24 19:21:54.172992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.567 qpair failed and we were unable to recover it.
00:29:48.567 [2024-07-24 19:21:54.173166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.567 [2024-07-24 19:21:54.173200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.567 qpair failed and we were unable to recover it.
00:29:48.567 [2024-07-24 19:21:54.173404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.567 [2024-07-24 19:21:54.173444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.567 qpair failed and we were unable to recover it.
00:29:48.567 [2024-07-24 19:21:54.173611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.567 [2024-07-24 19:21:54.173662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.567 qpair failed and we were unable to recover it.
00:29:48.567 [2024-07-24 19:21:54.173831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.567 [2024-07-24 19:21:54.173869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.567 qpair failed and we were unable to recover it.
00:29:48.567 [2024-07-24 19:21:54.174040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.567 [2024-07-24 19:21:54.174075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.567 qpair failed and we were unable to recover it.
00:29:48.567 [2024-07-24 19:21:54.174251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.567 [2024-07-24 19:21:54.174286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.567 qpair failed and we were unable to recover it.
00:29:48.567 [2024-07-24 19:21:54.174478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.567 [2024-07-24 19:21:54.174514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.567 qpair failed and we were unable to recover it.
00:29:48.567 [2024-07-24 19:21:54.174691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.567 [2024-07-24 19:21:54.174727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.567 qpair failed and we were unable to recover it.
00:29:48.567 [2024-07-24 19:21:54.174899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.567 [2024-07-24 19:21:54.174934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.567 qpair failed and we were unable to recover it.
00:29:48.567 [2024-07-24 19:21:54.175170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.175213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.175412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.175457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.175618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.175660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.175868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.175903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.176120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.176155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.176435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.176471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.176604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.176645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.176819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.176853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.177031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.177077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.177251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.177285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 
00:29:48.567 [2024-07-24 19:21:54.177462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.177516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.177663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.177698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.177861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.177895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.178091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.178126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.178324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.178359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.178502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.178543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.178765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.178799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.178970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.179005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.179237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.179271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.179502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.179537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 
00:29:48.567 [2024-07-24 19:21:54.179687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.179720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.179938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.179972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.180193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.180227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.180378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.180414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.180592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.180626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.180800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.180835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.567 [2024-07-24 19:21:54.181048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.567 [2024-07-24 19:21:54.181090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.567 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.181284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.181320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.181510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.181545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.181691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.181730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 
00:29:48.568 [2024-07-24 19:21:54.181911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.181945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.182151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.182186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.182385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.182420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.182580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.182616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.182826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.182860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.183003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.183041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.183200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.183234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.183401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.183449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.183601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.183635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.183842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.183876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 
00:29:48.568 [2024-07-24 19:21:54.184082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.184117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.184296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.184330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.184482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.184518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.184650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.184692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.184881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.184916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.185106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.185140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.185309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.185349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.185532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.185567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.185711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.185746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.185970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.186005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 
00:29:48.568 [2024-07-24 19:21:54.186237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.186287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.186511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.186546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.186702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.186737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.186952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.186986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.187159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.187193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.187384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.187418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.187578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.187612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.187823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.187858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.188067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.188109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.188294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.188339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 
00:29:48.568 [2024-07-24 19:21:54.188523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.188566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.188760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.188794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.189002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.189039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.189203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.189238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.189408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.189462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.189603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.189645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.189815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.189850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.190035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.190081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.190283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.190322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.190473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.190509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 
00:29:48.568 [2024-07-24 19:21:54.190645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.190679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.190814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.190854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.191025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.191059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.191251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.191286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.191477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.191513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.191687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.191721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.191916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.191950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.192088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.192125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.192340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.192375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.192540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.192575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 
00:29:48.568 [2024-07-24 19:21:54.192707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.192742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.192913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.192947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.193114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.193148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.193361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.193397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.193566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.193601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.193798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.193833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.194005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.194047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.194247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.194282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.194438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.194475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.194662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.194697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 
00:29:48.568 [2024-07-24 19:21:54.194906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.194940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.195089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.195124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.195349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.195383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.195546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.195582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.195760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.195794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.568 qpair failed and we were unable to recover it. 00:29:48.568 [2024-07-24 19:21:54.195978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.568 [2024-07-24 19:21:54.196013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.196185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.196223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.196373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.196408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.196600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.196635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.196784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.196818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 
00:29:48.569 [2024-07-24 19:21:54.197051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.197087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.197388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.197422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.197597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.197632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.197830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.197865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.198043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.198077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.198235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.198269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.198446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.198481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.198641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.198675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.198846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.198881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.199060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.199094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 
00:29:48.569 [2024-07-24 19:21:54.199286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.199321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.199522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.199564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.199747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.199782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.199995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.200030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.200247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.200281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.200482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.200518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.200664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.200706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.200890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.200924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.201130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.201165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.201380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.201414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 
00:29:48.569 [2024-07-24 19:21:54.201594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.201629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.201777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.201811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.202044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.202079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.202290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.202324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.202492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.202534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.202684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.202718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.202906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.202947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.203133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.203168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.203377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.203412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.203601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.203635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 
00:29:48.569 [2024-07-24 19:21:54.203843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.203877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.204022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.204056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.204253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.204288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.204498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.204534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.204672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.204714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.204913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.204947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.205101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.205142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.205314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.205348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.205560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.205604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.205805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.205838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 
00:29:48.569 [2024-07-24 19:21:54.206044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.206086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.206267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.206302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.206504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.206546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.206689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.206724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.206901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.206935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.207160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.207195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.207339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.207373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.207568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.207603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.207790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.207824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 00:29:48.569 [2024-07-24 19:21:54.207999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.569 [2024-07-24 19:21:54.208040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.569 qpair failed and we were unable to recover it. 
00:29:48.569 [2024-07-24 19:21:54.208326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.569 [2024-07-24 19:21:54.208366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.569 qpair failed and we were unable to recover it.
00:29:48.569 [2024-07-24 19:21:54.208574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.569 [2024-07-24 19:21:54.208609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.569 qpair failed and we were unable to recover it.
00:29:48.569 [2024-07-24 19:21:54.208850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.569 [2024-07-24 19:21:54.208886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.569 qpair failed and we were unable to recover it.
00:29:48.569 [2024-07-24 19:21:54.209083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.569 [2024-07-24 19:21:54.209123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.569 qpair failed and we were unable to recover it.
00:29:48.569 [2024-07-24 19:21:54.209317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.569 [2024-07-24 19:21:54.209353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.569 qpair failed and we were unable to recover it.
00:29:48.569 [2024-07-24 19:21:54.209539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.569 [2024-07-24 19:21:54.209574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.569 qpair failed and we were unable to recover it.
00:29:48.569 [2024-07-24 19:21:54.209733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.569 [2024-07-24 19:21:54.209773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.569 qpair failed and we were unable to recover it.
00:29:48.569 [2024-07-24 19:21:54.209994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.210029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.210235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.210275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.210493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.210528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.210673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.210708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.210968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.211002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.211172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.211214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.211459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.211515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.211656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.211692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.211876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.211910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.212133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.212168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.212401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.212445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.212623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.212658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.212822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.212864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.213161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.213195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.213447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.213482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.213656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.213693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.213896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.213931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.214064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.214098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.214364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.214399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.214585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.214620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.214780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.214814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.215062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.215101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.215331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.215366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.215579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.215615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.215901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.215936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.216122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.216165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.216357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.216392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.216572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.216607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.216784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.216818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.216994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.217028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.217257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.217300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.217531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.217566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.217722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.217756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.217989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.218024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.218266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.218309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.218486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.218521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.218706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.218746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.218921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.218957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.219161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.219195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.219359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.219393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.219550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.219585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.219805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.219840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.220040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.220074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.220318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.220353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.220542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.220577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.220721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.220763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.220973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.221007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.221226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.221260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.221576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.221611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.221765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.221799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.222053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.222087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.222274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.222312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.222566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.222608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.222901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.222941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.223221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.223259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.223497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.223533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.223683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.223717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.223914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.223949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.224232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.224265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.224504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.224540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.224717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.224751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.224999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.225034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.225252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.225292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.225514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.225550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.225771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.225805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.226049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.226082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.226312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.226347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.226538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.226573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.570 [2024-07-24 19:21:54.226775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.570 [2024-07-24 19:21:54.226809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.570 qpair failed and we were unable to recover it.
00:29:48.571 [2024-07-24 19:21:54.226973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.571 [2024-07-24 19:21:54.227008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.571 qpair failed and we were unable to recover it.
00:29:48.571 [2024-07-24 19:21:54.227238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.571 [2024-07-24 19:21:54.227273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.571 qpair failed and we were unable to recover it.
00:29:48.571 [2024-07-24 19:21:54.227460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.571 [2024-07-24 19:21:54.227497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.571 qpair failed and we were unable to recover it.
00:29:48.571 [2024-07-24 19:21:54.227670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.571 [2024-07-24 19:21:54.227705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.571 qpair failed and we were unable to recover it.
00:29:48.571 [2024-07-24 19:21:54.227852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.571 [2024-07-24 19:21:54.227886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.571 qpair failed and we were unable to recover it.
00:29:48.571 [2024-07-24 19:21:54.228093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.571 [2024-07-24 19:21:54.228127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.571 qpair failed and we were unable to recover it.
00:29:48.852 [2024-07-24 19:21:54.228365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.852 [2024-07-24 19:21:54.228400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.852 qpair failed and we were unable to recover it.
00:29:48.852 [2024-07-24 19:21:54.228558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.852 [2024-07-24 19:21:54.228599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.852 qpair failed and we were unable to recover it.
00:29:48.852 [2024-07-24 19:21:54.228783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.852 [2024-07-24 19:21:54.228819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.852 qpair failed and we were unable to recover it.
00:29:48.852 [2024-07-24 19:21:54.229096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.852 [2024-07-24 19:21:54.229131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.852 qpair failed and we were unable to recover it.
00:29:48.852 [2024-07-24 19:21:54.229399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.852 [2024-07-24 19:21:54.229444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.852 qpair failed and we were unable to recover it.
00:29:48.852 [2024-07-24 19:21:54.229658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.852 [2024-07-24 19:21:54.229693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.852 qpair failed and we were unable to recover it.
00:29:48.852 [2024-07-24 19:21:54.229919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.852 [2024-07-24 19:21:54.229953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.852 qpair failed and we were unable to recover it.
00:29:48.852 [2024-07-24 19:21:54.230159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.852 [2024-07-24 19:21:54.230199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.852 qpair failed and we were unable to recover it.
00:29:48.852 [2024-07-24 19:21:54.230396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.852 [2024-07-24 19:21:54.230440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.852 qpair failed and we were unable to recover it.
00:29:48.852 [2024-07-24 19:21:54.230648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.852 [2024-07-24 19:21:54.230689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.852 qpair failed and we were unable to recover it.
00:29:48.852 [2024-07-24 19:21:54.230938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.852 [2024-07-24 19:21:54.230972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.852 qpair failed and we were unable to recover it.
00:29:48.852 [2024-07-24 19:21:54.231223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.852 [2024-07-24 19:21:54.231258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.852 qpair failed and we were unable to recover it.
00:29:48.852 [2024-07-24 19:21:54.231473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.852 [2024-07-24 19:21:54.231511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.852 qpair failed and we were unable to recover it.
00:29:48.852 [2024-07-24 19:21:54.231698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.231732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.231938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.231973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.232153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.232196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.232461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.232498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.232634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.232666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.232806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.232839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.233101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.233136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.233381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.233418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.233595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.233630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.233854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.233888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.234141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.234176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.234348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.234382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.234569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.234605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.234861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.234895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.235075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.235110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.235334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.235368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.235541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.235577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.235799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.235834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.236008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.236052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.236300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.236334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.236543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.236586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.236766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.236800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.237070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.237105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.237287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.237322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.237517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.237552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.237718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.237753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.238050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.238083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.238290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.238324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.238509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.238555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.238765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.238799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.239047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.239080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.239383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.239418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.239642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.239681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.239900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.239935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.240169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.853 [2024-07-24 19:21:54.240203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.853 qpair failed and we were unable to recover it.
00:29:48.853 [2024-07-24 19:21:54.240461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.240497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.240726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.240761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.240922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.240967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.241254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.241290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.241555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.241591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.241875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.241910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.242126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.242162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.242411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.242460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.242629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.242663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.242822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.242857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.243112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.243146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.243388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.243423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.243646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.243686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.243943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.243977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.244155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.244188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.244419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.244466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.244653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.244687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.244958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.244994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.245145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.245179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.245391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.245426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.245633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.245667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.245837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.245871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.246035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.246069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.246243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.246276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.246478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.246513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.246747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.246781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.246954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.246987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.247118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.247151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.247319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.247353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.247527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.247562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.247773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.247806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.248006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.248039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.248236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.248269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.248445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.248485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.248661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.248695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.248903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.854 [2024-07-24 19:21:54.248936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.854 qpair failed and we were unable to recover it.
00:29:48.854 [2024-07-24 19:21:54.249111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.855 [2024-07-24 19:21:54.249144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.855 qpair failed and we were unable to recover it.
00:29:48.855 [2024-07-24 19:21:54.249281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.855 [2024-07-24 19:21:54.249314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.855 qpair failed and we were unable to recover it.
00:29:48.855 [2024-07-24 19:21:54.249520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.855 [2024-07-24 19:21:54.249555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.855 qpair failed and we were unable to recover it.
00:29:48.855 [2024-07-24 19:21:54.249800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.855 [2024-07-24 19:21:54.249833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.855 qpair failed and we were unable to recover it.
00:29:48.855 [2024-07-24 19:21:54.250097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.250131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.250329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.250363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.250536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.250570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.250747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.250781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.251048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.251081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.251266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.251303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.251492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.251527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.251698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.251732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.251940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.251974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.252185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.252219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 
00:29:48.855 [2024-07-24 19:21:54.252422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.252477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.252723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.252758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.252997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.253030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.253218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.253253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.253459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.253494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.253679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.253716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.253885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.253919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.254116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.254150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.254349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.254383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.254595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.254629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 
00:29:48.855 [2024-07-24 19:21:54.254809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.254843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.255042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.255076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.255271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.255305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.255512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.255547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.255720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.255753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.255921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.255954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.256122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.256156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.256326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.256360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.256559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.256593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.256804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.256838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 
00:29:48.855 [2024-07-24 19:21:54.257020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.257054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.257264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.257297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.257503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.257538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.855 qpair failed and we were unable to recover it. 00:29:48.855 [2024-07-24 19:21:54.257725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.855 [2024-07-24 19:21:54.257763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.257966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.257999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.258202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.258236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.258445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.258479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.258657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.258690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.258851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.258884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.259081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.259115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 
00:29:48.856 [2024-07-24 19:21:54.259368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.259402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.259611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.259645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.259842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.259875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.260047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.260080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.260290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.260323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.260558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.260593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.260803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.260837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.261006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.261040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.261242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.261275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.261538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.261573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 
00:29:48.856 [2024-07-24 19:21:54.261699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.261733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.261945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.261978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.262166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.262204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.262396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.262437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.262655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.262689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.262869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.262902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.263110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.263143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.263392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.263426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.263658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.263692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.263821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.263855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 
00:29:48.856 [2024-07-24 19:21:54.264058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.264092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.264329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.264362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.264560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.264596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.264798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.264832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.265031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.265064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.265257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.265291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.265504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.265539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.856 [2024-07-24 19:21:54.265740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.856 [2024-07-24 19:21:54.265774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.856 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.266018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.266051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.266235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.266268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 
00:29:48.857 [2024-07-24 19:21:54.266462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.266497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.266680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.266714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.266878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.266920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.267106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.267145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.267348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.267381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.267595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.267629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.267832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.267865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.268068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.268102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.268362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.268395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.268608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.268642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 
00:29:48.857 [2024-07-24 19:21:54.268814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.268848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.269046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.269080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.269314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.269347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.269523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.269558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.269765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.269798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.270072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.270106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.270267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.270311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.270510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.270544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.270751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.270786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.270990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.271023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 
00:29:48.857 [2024-07-24 19:21:54.271196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.271229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.271432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.271466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.271664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.271698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.271960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.271993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.272247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.272281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.857 [2024-07-24 19:21:54.272528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.857 [2024-07-24 19:21:54.272562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.857 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.272739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.272773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.272955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.272988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.273148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.273182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.273327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.273360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 
00:29:48.858 [2024-07-24 19:21:54.273575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.273609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.273814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.273848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.274047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.274080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.274306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.274340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.274525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.274563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.274739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.274772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.274934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.274968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.275163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.275197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.275458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.275492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.275726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.275759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 
00:29:48.858 [2024-07-24 19:21:54.275963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.275997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.276191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.276224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.276484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.276518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.276768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.276808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.277009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.277043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.277213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.277247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.277449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.277484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.277718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.277753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.277888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.277922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.278095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.278129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 
00:29:48.858 [2024-07-24 19:21:54.278302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.278336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.278511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.278546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.278762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.278796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.279049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.279082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.279321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.279355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.279552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.279586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.279797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.279831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.280044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.280077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.280268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.280301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.280481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.280515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 
00:29:48.858 [2024-07-24 19:21:54.280714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.280748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.280999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.281033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.281281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.858 [2024-07-24 19:21:54.281314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.858 qpair failed and we were unable to recover it. 00:29:48.858 [2024-07-24 19:21:54.281516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.281551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 00:29:48.859 [2024-07-24 19:21:54.281750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.281784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 00:29:48.859 [2024-07-24 19:21:54.281988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.282022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 00:29:48.859 [2024-07-24 19:21:54.282164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.282198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 00:29:48.859 [2024-07-24 19:21:54.282362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.282395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 00:29:48.859 [2024-07-24 19:21:54.282572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.282606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 00:29:48.859 [2024-07-24 19:21:54.282779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.282812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 
00:29:48.859 [2024-07-24 19:21:54.282992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.283026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 00:29:48.859 [2024-07-24 19:21:54.283222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.283256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 00:29:48.859 [2024-07-24 19:21:54.283484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.283519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 00:29:48.859 [2024-07-24 19:21:54.283654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.283687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 00:29:48.859 [2024-07-24 19:21:54.283858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.283892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 00:29:48.859 [2024-07-24 19:21:54.284056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.284089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 00:29:48.859 [2024-07-24 19:21:54.284261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.284294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 00:29:48.859 [2024-07-24 19:21:54.284466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.284500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 00:29:48.859 [2024-07-24 19:21:54.284699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.284732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 00:29:48.859 [2024-07-24 19:21:54.284902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.859 [2024-07-24 19:21:54.284936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.859 qpair failed and we were unable to recover it. 
00:29:48.859 [2024-07-24 19:21:54.285110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.859 [2024-07-24 19:21:54.285143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.859 qpair failed and we were unable to recover it.
[... roughly 200 further identical triplets omitted: the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." sequence repeats with only the microsecond timestamps advancing (19:21:54.285 through 19:21:54.333), always with errno = 111 (ECONNREFUSED) for tqpair=0x7f5e08000b90, addr=10.0.0.2, port=4420 ...]
00:29:48.865 [2024-07-24 19:21:54.332945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.865 [2024-07-24 19:21:54.332978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:48.865 qpair failed and we were unable to recover it.
00:29:48.865 [2024-07-24 19:21:54.333172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.333205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.333404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.333455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.333629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.333662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.333860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.333893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.334136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.334169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.334363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.334396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.334622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.334655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.334824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.334857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.335027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.335060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.335234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.335268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 
00:29:48.865 [2024-07-24 19:21:54.335445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.335479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.335666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.335699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.335952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.335986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.336228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.336262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.336507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.336541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.336746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.336780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.337012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.337045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.337220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.337254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.337424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.337463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.865 qpair failed and we were unable to recover it. 00:29:48.865 [2024-07-24 19:21:54.337667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.865 [2024-07-24 19:21:54.337700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 
00:29:48.866 [2024-07-24 19:21:54.337912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.337950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.338135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.338171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.338360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.338394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.338611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.338646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.338794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.338827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.339036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.339069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.339251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.339285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.339484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.339519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.339709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.339744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.339907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.339941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 
00:29:48.866 [2024-07-24 19:21:54.340077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.340110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.340321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.340355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.340563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.340597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.340842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.340876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.341132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.341166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.341373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.341407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.341643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.341678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.341855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.341889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.342085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.342118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.342323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.342357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 
00:29:48.866 [2024-07-24 19:21:54.342557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.342591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.342825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.342858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.343047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.343081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.343241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.343274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.343481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.343515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.343761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.343794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.343995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.344028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.344201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.344234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.344407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.344453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.344592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.344626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 
00:29:48.866 [2024-07-24 19:21:54.344800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.344834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.345033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.345066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.345318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.345351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.345570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.345604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.345819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.866 [2024-07-24 19:21:54.345853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.866 qpair failed and we were unable to recover it. 00:29:48.866 [2024-07-24 19:21:54.345995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.346029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.346228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.346261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.346503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.346537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.346712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.346745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.346943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.346977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 
00:29:48.867 [2024-07-24 19:21:54.347151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.347189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.347347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.347380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.347508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.347541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.347746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.347780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.347978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.348012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.348183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.348216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.348393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.348426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.348617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.348651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.348783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.348826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.348988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.349021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 
00:29:48.867 [2024-07-24 19:21:54.349221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.349254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.349501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.349536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.349711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.349745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.349916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.349950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.350133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.350168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.350339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.350373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.350557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.350592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.350757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.350791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.351009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.351042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.351173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.351206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 
00:29:48.867 [2024-07-24 19:21:54.351409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.351456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.351660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.351694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.351867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.351900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.352086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.352120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.352338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.352372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.352561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.352595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.352782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.352815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.867 qpair failed and we were unable to recover it. 00:29:48.867 [2024-07-24 19:21:54.353021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.867 [2024-07-24 19:21:54.353055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.353247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.353281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.353487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.353521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 
00:29:48.868 [2024-07-24 19:21:54.353708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.353751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.353933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.353966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.354168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.354201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.354448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.354482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.354697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.354730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.354916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.354950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.355137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.355175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.355337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.355370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.355562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.355596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.355785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.355825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 
00:29:48.868 [2024-07-24 19:21:54.356035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.356073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.356284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.356318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.356519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.356553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.356755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.356788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.356986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.357019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.357194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.357227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.357422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.357474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.357625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.357660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.357836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.357869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.358063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.358096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 
00:29:48.868 [2024-07-24 19:21:54.358308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.358342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.358483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.358518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.358693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.358727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.358904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.358938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.359119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.359153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.359292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.359325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.359522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.359556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.359736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.359770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.359979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.360013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.360234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.360267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 
00:29:48.868 [2024-07-24 19:21:54.360437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.360471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.868 qpair failed and we were unable to recover it. 00:29:48.868 [2024-07-24 19:21:54.360643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.868 [2024-07-24 19:21:54.360677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.360849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.360883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.361083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.361116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.361359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.361392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.361541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.361575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.361779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.361812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.362005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.362039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.362225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.362267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.362415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.362468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 
00:29:48.869 [2024-07-24 19:21:54.362643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.362677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.362874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.362908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.363165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.363199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.363451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.363486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.363683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.363716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.363907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.363946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.364165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.364198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.364442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.364476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.364651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.364685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 00:29:48.869 [2024-07-24 19:21:54.364889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.869 [2024-07-24 19:21:54.364923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.869 qpair failed and we were unable to recover it. 
00:29:48.875 [2024-07-24 19:21:54.408030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.408065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.408248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.408287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.408461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.408506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.408682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.408716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.408927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.408961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.409162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.409196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.409378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.409413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.409606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.409640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.409803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.409837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.410018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.410053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 
00:29:48.875 [2024-07-24 19:21:54.410221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.410263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.410463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.410499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.410677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.410710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.410906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.410941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.411144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.411179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.411379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.411412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.411634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.411669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.411837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.411872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.412076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.412114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.412295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.412328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 
00:29:48.875 [2024-07-24 19:21:54.412542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.412578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.412747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.412780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.412991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.413032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.413244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.413279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.413592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.413627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.413848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.875 [2024-07-24 19:21:54.413882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.875 qpair failed and we were unable to recover it. 00:29:48.875 [2024-07-24 19:21:54.414065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.414107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.414321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.414356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.414498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.414533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.414717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.414752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 
00:29:48.876 [2024-07-24 19:21:54.414927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.414961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.415136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.415171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.415340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.415374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.415585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.415619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.415774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.415808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.415985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.416019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.416189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.416223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.416403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.416450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.416669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.416702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.416892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.416927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 
00:29:48.876 [2024-07-24 19:21:54.417068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.417101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.417304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.417343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.417513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.417549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.417813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.417846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.418099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.418134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.418305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.418339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.418530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.418567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.418737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.418771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.418985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.419019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.419230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.419264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 
00:29:48.876 [2024-07-24 19:21:54.419442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.419497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.419693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.419728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.419930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.419963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.420217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.420252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.420484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.420526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.420777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.420812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.421038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.421072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.421249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.421291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.421469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.421504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.421698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.421741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 
00:29:48.876 [2024-07-24 19:21:54.421947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.421980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.422200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.422235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.876 [2024-07-24 19:21:54.422398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.876 [2024-07-24 19:21:54.422439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.876 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.422645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.422681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.422908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.422943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.423125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.423165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.423377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.423411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.423592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.423626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.423837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.423871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.424087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.424121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 
00:29:48.877 [2024-07-24 19:21:54.424323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.424357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.424556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.424591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.424791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.424825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.425002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.425037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.425211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.425245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.425447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.425498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.425683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.425718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.426007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.426043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.426240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.426273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.426421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.426464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 
00:29:48.877 [2024-07-24 19:21:54.426666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.426700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.426916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.426951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.427181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.427215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.427404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.427449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.427654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.427687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.427885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.427922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.428138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.428172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.428326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.428358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.428550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.428584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.428733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.428767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 
00:29:48.877 [2024-07-24 19:21:54.428963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.429002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.429227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.429264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.429461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.429494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.429694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.429727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.429881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.429914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.430116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.430149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.430361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.430394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.430601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.430633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.877 [2024-07-24 19:21:54.430809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.877 [2024-07-24 19:21:54.430843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.877 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.431022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.431055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 
00:29:48.878 [2024-07-24 19:21:54.431235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.431267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.431438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.431472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.431672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.431707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.431883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.431916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.432094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.432127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.432326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.432359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.432564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.432598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.432770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.432802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.432981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.433017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.433238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.433274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 
00:29:48.878 [2024-07-24 19:21:54.433477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.433510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.433650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.433683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.433874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.433907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.434120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.434154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.434344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.434377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.434531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.434565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.434722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.434755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.434955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.435006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.435237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.435276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.435472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.435508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 
00:29:48.878 [2024-07-24 19:21:54.435711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.435752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.435923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.435958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.436105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.436139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.436324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.436359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.436517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.436552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.436725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.436759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.436933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.436966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.437151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.437185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.878 [2024-07-24 19:21:54.437355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.878 [2024-07-24 19:21:54.437392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.878 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.437574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.437608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 
00:29:48.879 [2024-07-24 19:21:54.437779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.437817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.438038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.438072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.438249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.438282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.438472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.438507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.438700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.438732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.438902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.438935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.439134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.439171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.439378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.439411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.439652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.439685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.439859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.439892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 
00:29:48.879 [2024-07-24 19:21:54.440046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.440079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.440257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.440291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.440454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.440488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.440667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.440701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.440879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.440913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.441098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.441132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.441361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.441395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.441568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.441603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.441753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.441787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.441950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.441990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 
00:29:48.879 [2024-07-24 19:21:54.442133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.442167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.442307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.442340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.442492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.442526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.442708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.442741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.442917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.442951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.443114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.443148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.443321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.443355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.443523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.443562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.443735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.443770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.443946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.443981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 
00:29:48.879 [2024-07-24 19:21:54.444160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.444193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.444323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.444356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.444522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.444556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.444764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.444800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.879 [2024-07-24 19:21:54.444983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.879 [2024-07-24 19:21:54.445017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.879 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.445215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.445249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.445486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.445520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.445746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.445783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.446002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.446036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.446223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.446257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 
00:29:48.880 [2024-07-24 19:21:54.446446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.446490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.446683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.446716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.446875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.446910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.447119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.447156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.447331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.447364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.447538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.447571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.447771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.447804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.448012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.448046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.448220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.448254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.448458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.448492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 
00:29:48.880 [2024-07-24 19:21:54.448661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.448694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.448858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.448891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.449062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.449097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.449271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.449305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.449483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.449518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.449692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.449725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.449899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.449932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.450097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.450131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.450316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.450353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.450556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.450590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 
00:29:48.880 [2024-07-24 19:21:54.450754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.450787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.450965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.450998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.451199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.451233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.451405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.451451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.451627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.451661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.451848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.451881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.452077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.452110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.452279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.452320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.452504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.452539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.452732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.452768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 
00:29:48.880 [2024-07-24 19:21:54.453003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.453036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.453255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.880 [2024-07-24 19:21:54.453289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.880 qpair failed and we were unable to recover it. 00:29:48.880 [2024-07-24 19:21:54.453548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.453585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.453725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.453761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.453933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.453966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.454144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.454176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.454347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.454380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.454560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.454597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.454773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.454808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.454982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.455016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 
00:29:48.881 [2024-07-24 19:21:54.455181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.455215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.455361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.455394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.455605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.455642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.455848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.455881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.456057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.456090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.456264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.456297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.456444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.456478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.456651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.456685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.456854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.456888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.457089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.457125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 
00:29:48.881 [2024-07-24 19:21:54.457347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.457381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.457574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.457607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.457812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.457849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.458035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.458069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.458248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.458281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.458444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.458478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.458642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.458675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.458800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.458834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.459047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.459081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.459287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.459320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 
00:29:48.881 [2024-07-24 19:21:54.459471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.459504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.459652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.459684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.459885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.459919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.460106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.460140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.460318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.460351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.460532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.460568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.460789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.881 [2024-07-24 19:21:54.460822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.881 qpair failed and we were unable to recover it. 00:29:48.881 [2024-07-24 19:21:54.461009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.461048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.461278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.461315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.461513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.461548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 
00:29:48.882 [2024-07-24 19:21:54.461756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.461789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.461996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.462030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.462206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.462240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.462385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.462417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.462605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.462638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.462825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.462858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.463045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.463085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.463246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.463280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.463493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.463527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.463737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.463771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 
00:29:48.882 [2024-07-24 19:21:54.463932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.463965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.464177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.464210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.464464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.464502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.464670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.464707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.464881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.464914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.465062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.465095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.465299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.465332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.465544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.465579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.465776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.465811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.466011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.466044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 
00:29:48.882 [2024-07-24 19:21:54.466189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.466222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.466391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.466435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.466651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.466685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.466816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.466848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.467034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.467067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.467234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.467267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.467459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.467493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.467725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.467762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.467930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.467964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.468128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.468161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 
00:29:48.882 [2024-07-24 19:21:54.468325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.468358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.468546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.468579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.468720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.468757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.468897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.468930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.469092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.882 [2024-07-24 19:21:54.469126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.882 qpair failed and we were unable to recover it. 00:29:48.882 [2024-07-24 19:21:54.469282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.469315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 00:29:48.883 [2024-07-24 19:21:54.469479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.469513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 00:29:48.883 [2024-07-24 19:21:54.469673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.469714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 00:29:48.883 [2024-07-24 19:21:54.469873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.469907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 00:29:48.883 [2024-07-24 19:21:54.470077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.470110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 
00:29:48.883 [2024-07-24 19:21:54.470232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.470266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 00:29:48.883 [2024-07-24 19:21:54.470450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.470484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 00:29:48.883 [2024-07-24 19:21:54.470611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.470644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 00:29:48.883 [2024-07-24 19:21:54.470778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.470812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 00:29:48.883 [2024-07-24 19:21:54.470976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.471010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 00:29:48.883 [2024-07-24 19:21:54.471176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.471207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 00:29:48.883 [2024-07-24 19:21:54.471369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.471403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 00:29:48.883 [2024-07-24 19:21:54.471607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.471640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 00:29:48.883 [2024-07-24 19:21:54.471801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.471834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 00:29:48.883 [2024-07-24 19:21:54.472020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.472056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 
00:29:48.883 [2024-07-24 19:21:54.472244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.472277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 00:29:48.883 [2024-07-24 19:21:54.472443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.883 [2024-07-24 19:21:54.472477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.883 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.472635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.472669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.472794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.472827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.472965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.472998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.473151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.473186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.473380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.473414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.473572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.473604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.473763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.473797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.473945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.473978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 
00:29:48.884 [2024-07-24 19:21:54.474114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.474147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.474301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.474338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.474506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.474542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.474730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.474764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.474899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.474932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.475106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.475139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.475262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.475296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.475466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.475503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.475671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.475705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.475892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.475925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 
00:29:48.884 [2024-07-24 19:21:54.476074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.476107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.476292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.476324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.476492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.476526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.476661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.476694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.476827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.476860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.477016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.477049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.477205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.884 [2024-07-24 19:21:54.477238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.884 qpair failed and we were unable to recover it. 00:29:48.884 [2024-07-24 19:21:54.477364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.885 [2024-07-24 19:21:54.477403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.885 qpair failed and we were unable to recover it. 00:29:48.885 [2024-07-24 19:21:54.477599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.885 [2024-07-24 19:21:54.477633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.885 qpair failed and we were unable to recover it. 00:29:48.885 [2024-07-24 19:21:54.477820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.885 [2024-07-24 19:21:54.477854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.885 qpair failed and we were unable to recover it. 
00:29:48.885 [2024-07-24 19:21:54.477993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.885 [2024-07-24 19:21:54.478024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.885 qpair failed and we were unable to recover it. 00:29:48.885 [2024-07-24 19:21:54.478206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.885 [2024-07-24 19:21:54.478239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.885 qpair failed and we were unable to recover it. 00:29:48.885 [2024-07-24 19:21:54.478445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.885 [2024-07-24 19:21:54.478479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.885 qpair failed and we were unable to recover it. 00:29:48.885 [2024-07-24 19:21:54.478654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.885 [2024-07-24 19:21:54.478688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.885 qpair failed and we were unable to recover it. 00:29:48.885 [2024-07-24 19:21:54.478861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.885 [2024-07-24 19:21:54.478906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.885 qpair failed and we were unable to recover it. 00:29:48.885 [2024-07-24 19:21:54.479113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.885 [2024-07-24 19:21:54.479146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.885 qpair failed and we were unable to recover it. 00:29:48.885 [2024-07-24 19:21:54.479282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.885 [2024-07-24 19:21:54.479315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.885 qpair failed and we were unable to recover it. 00:29:48.885 [2024-07-24 19:21:54.479517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.885 [2024-07-24 19:21:54.479549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.885 qpair failed and we were unable to recover it. 00:29:48.885 [2024-07-24 19:21:54.479739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.885 [2024-07-24 19:21:54.479775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.885 qpair failed and we were unable to recover it. 00:29:48.885 [2024-07-24 19:21:54.479949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.885 [2024-07-24 19:21:54.479985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.885 qpair failed and we were unable to recover it. 
00:29:48.885 [2024-07-24 19:21:54.480160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.480193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.480379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.480413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.480587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.480620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.480831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.480865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.481047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.481080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.481298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.481331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.481486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.481519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.481677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.481710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.481913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.481951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.482180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.482214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.482378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.482411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.482620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.482654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.482800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.482833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.483006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.483040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.483258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.483291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.483475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.483509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.483684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.483717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.483854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.483886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.484050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.484083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.484214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.484245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.484447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.484481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.484689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.484722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.484892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.484926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.485122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.885 [2024-07-24 19:21:54.485155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.885 qpair failed and we were unable to recover it.
00:29:48.885 [2024-07-24 19:21:54.485335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.485367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.485577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.485611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.485807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.485841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.486039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.486078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.486313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.486347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.486516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.486555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.486719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.486752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.486907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.486940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.487097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.487130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.487260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.487293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.487478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.487511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.487715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.487748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.487891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.487924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.488123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.488156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.488395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.488434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.488661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.488695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.488868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.488904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.489107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.489141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.489313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.489346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.489553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.489588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.489808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.489841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.490031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.490063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.490207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.490240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.490368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.490400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.490601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.490636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.490812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.490851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.491019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.491052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.491225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.491258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.491459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.491492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.491663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.491697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.491922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.491956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.492159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.492192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.492383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.492415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.492659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.492692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.492863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.492896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.493077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.493110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.886 qpair failed and we were unable to recover it.
00:29:48.886 [2024-07-24 19:21:54.493299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.886 [2024-07-24 19:21:54.493332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.493534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.493568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.493766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.493799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.493972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.494006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.494209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.494242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.494489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.494522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.494691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.494728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.494863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.494901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.495061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.495094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.495275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.495308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.495442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.495476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.495629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.495662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.495799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.495832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.495969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.496003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.496201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.496234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.496442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.496475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.496620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.496653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.496782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.496815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.497009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.497042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.497242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.497275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.497441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.497475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.497657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.497690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.497884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.497917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.498138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.498171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.498388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.498421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.498631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.498663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.498869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.498903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.499106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.499139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.499305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.499338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.499538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.499572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.499713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.499747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.499935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.499968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.500150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.500183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.500350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.500383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.500577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.500610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.500783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.500816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.887 qpair failed and we were unable to recover it.
00:29:48.887 [2024-07-24 19:21:54.500985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.887 [2024-07-24 19:21:54.501018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.501192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.501225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.501439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.501483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.501619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.501653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.501844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.501876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.502061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.502094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.502279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.502312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.502536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.502570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.502735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.502776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.502914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.502946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.503145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.503178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.503376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.503415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.503597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.503630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.503797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.503830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.504011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.504044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.504247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.504279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.504477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.504511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.504677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.504711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.504883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.504916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.505115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.505148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.505317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.505350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.505533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.505566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.505731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.505764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.505942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.505976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.506152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.506185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.506358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.506391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.506539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.506572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.506772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.506805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.506942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.506975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.507140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.507181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.507357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.507390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.507575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.507608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.507818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.507851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.508047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.508080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.888 qpair failed and we were unable to recover it.
00:29:48.888 [2024-07-24 19:21:54.508256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.888 [2024-07-24 19:21:54.508289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.508472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.508505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.508693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.508726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.508890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.508923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.509100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.509133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.509308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.509341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.509512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.509545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.509754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.509786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.510034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.510067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.510281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.510314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.510516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.510550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.510753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.510786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.511003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.511036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.511212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.511245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.511435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.511475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.511633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.511665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.511881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.511913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.512074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.512120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.512288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.512322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.512532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.512566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.512758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.512791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.512941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.512974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.513142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.513175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.513319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.513353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.513514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.513547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.513718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.513751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.513889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.513922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.514133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.514166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.514349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.514382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.514599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.514632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.514813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.514846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.515030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.515063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.515207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.889 [2024-07-24 19:21:54.515240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.889 qpair failed and we were unable to recover it.
00:29:48.889 [2024-07-24 19:21:54.515458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.890 [2024-07-24 19:21:54.515492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.890 qpair failed and we were unable to recover it.
00:29:48.890 [2024-07-24 19:21:54.515691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.890 [2024-07-24 19:21:54.515723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.890 qpair failed and we were unable to recover it.
00:29:48.890 [2024-07-24 19:21:54.515868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.890 [2024-07-24 19:21:54.515901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.890 qpair failed and we were unable to recover it.
00:29:48.890 [2024-07-24 19:21:54.516076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.890 [2024-07-24 19:21:54.516109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.890 qpair failed and we were unable to recover it.
00:29:48.890 [2024-07-24 19:21:54.516272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.890 [2024-07-24 19:21:54.516305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.890 qpair failed and we were unable to recover it.
00:29:48.890 [2024-07-24 19:21:54.516487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.890 [2024-07-24 19:21:54.516521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.890 qpair failed and we were unable to recover it.
00:29:48.890 [2024-07-24 19:21:54.516678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.890 [2024-07-24 19:21:54.516711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:48.890 qpair failed and we were unable to recover it.
00:29:48.890 [2024-07-24 19:21:54.516848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.516881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.517059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.517093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.517273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.517306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.517517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.517551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.517770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.517804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.518004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.518037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.518218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.518251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.518475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.518508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.518681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.518715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.518852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.518887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 
00:29:48.890 [2024-07-24 19:21:54.519069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.519102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.519228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.519261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.519442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.519475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.519644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.519686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.519857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.519890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.520062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.520095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.520278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.520311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.520548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.520587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.520802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.520836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.521042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.521075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 
00:29:48.890 [2024-07-24 19:21:54.521226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.521259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.521439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.521472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.521640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.521690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.521927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.521966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.522143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.522177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.522373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.522406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.522605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.522638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.522776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.522820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.523008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.523042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.890 qpair failed and we were unable to recover it. 00:29:48.890 [2024-07-24 19:21:54.523226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.890 [2024-07-24 19:21:54.523259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.891 qpair failed and we were unable to recover it. 
00:29:48.891 [2024-07-24 19:21:54.523403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.891 [2024-07-24 19:21:54.523443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.891 qpair failed and we were unable to recover it. 00:29:48.891 [2024-07-24 19:21:54.523659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.891 [2024-07-24 19:21:54.523708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.891 qpair failed and we were unable to recover it. 00:29:48.891 [2024-07-24 19:21:54.523911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.891 [2024-07-24 19:21:54.523945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.891 qpair failed and we were unable to recover it. 00:29:48.891 [2024-07-24 19:21:54.524129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.891 [2024-07-24 19:21:54.524162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.891 qpair failed and we were unable to recover it. 00:29:48.891 [2024-07-24 19:21:54.524325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.891 [2024-07-24 19:21:54.524358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.891 qpair failed and we were unable to recover it. 00:29:48.891 [2024-07-24 19:21:54.524556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.891 [2024-07-24 19:21:54.524590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.891 qpair failed and we were unable to recover it. 00:29:48.891 [2024-07-24 19:21:54.524742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.891 [2024-07-24 19:21:54.524775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.891 qpair failed and we were unable to recover it. 00:29:48.891 [2024-07-24 19:21:54.524933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.891 [2024-07-24 19:21:54.524967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.891 qpair failed and we were unable to recover it. 00:29:48.891 [2024-07-24 19:21:54.525173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.891 [2024-07-24 19:21:54.525206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.891 qpair failed and we were unable to recover it. 00:29:48.891 [2024-07-24 19:21:54.525365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.891 [2024-07-24 19:21:54.525398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:48.891 qpair failed and we were unable to recover it. 
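Every attempt above fails identically: connect() returns errno = 111, which on Linux is ECONNREFUSED, meaning the host at 10.0.0.2 answered with a TCP RST because nothing was accepting connections on port 4420 at that moment. The following is a minimal standalone sketch (not SPDK code; the loopback address is a stand-in and assumes no local listener on that port) that reproduces the same errno:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = { 0 };
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                     /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr); /* assumes nothing listens here */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on the port, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}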
00:29:49.167 [2024-07-24 19:21:54.525618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.525680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.525949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.526004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.526255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.526314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.526572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.526621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.526814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.526850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.527050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.527088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.527311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.527346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.527515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.527550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.527739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.527784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.527959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.527994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.528175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.528209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.528435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.528474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.528631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.528664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.528842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.528874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.529018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.529051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.529212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.529245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.529446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.529480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.529694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.529733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.529928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.529961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.530171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.530204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.530365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.530398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.530589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.530630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.530846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.530879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.531047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.531080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.531282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.531316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.531486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.531521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.167 qpair failed and we were unable to recover it.
00:29:49.167 [2024-07-24 19:21:54.531687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.167 [2024-07-24 19:21:54.531720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.531932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.531972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.532145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.532178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.532350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.532384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.532566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.532599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.532782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.532816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.532959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.532992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.533156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.533189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.533387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.533421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.533603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.533636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.533812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.533844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.534017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.534051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.534179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.534212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.534408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.534447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.534603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.534637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.534800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.534832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.535016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.535048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.535197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.535230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.535416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.535456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.535634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.535667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.535839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.535872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.536040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.536072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.536269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.536302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.536473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.536508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.536694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.536727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.536902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.536936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.537108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.537141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.537327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.537360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.537522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.537556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.537754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.537787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.537918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.537951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.538132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.538170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.538383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.538416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.538558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.538591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.538787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.538821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.538955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.538988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.539197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.539229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.168 [2024-07-24 19:21:54.539403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.168 [2024-07-24 19:21:54.539442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.168 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.539611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.539645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.539811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.539843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.540014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.540047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.540225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.540259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.540421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.540461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.540625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.540658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.540833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.540866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.541058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.541090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.541260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.541293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.541491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.541525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.541700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.541733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.541906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.541939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.542079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.542118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.542276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.542316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.542460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.542494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.542664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.542697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.542896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.542929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.543090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.543123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.543252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.543285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.543494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.543528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.543729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.543762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.543927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.543960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.544131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.544164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.544350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.544383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.544551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.544585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.544786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.544819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.544968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.545001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.545203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.545236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.545374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.545407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.545622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.545656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.545854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.545887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.546090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.546123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.546264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.546297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.546482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.546522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.546697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.546730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.546902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.546934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.547147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.169 [2024-07-24 19:21:54.547181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.169 qpair failed and we were unable to recover it.
00:29:49.169 [2024-07-24 19:21:54.547378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.547412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.547592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.547626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.547796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.547829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.548028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.548062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.548245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.548278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.548469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.548512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.548713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.548745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.548919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.548951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.549129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.549163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.549343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.549376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.549539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.549572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.549775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.549808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.549990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.550023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.550234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.550267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.550437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.550471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.550618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.550651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.550789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.550822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.550999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.551032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.551231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.551264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.551444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.551478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.551674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.551707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.551876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.551910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.552106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.552139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.552317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.552350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.552529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.552563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.552745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.552778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.552977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.553010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.553156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.553190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.553357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.553390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.553581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.553615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.553759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.553792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.554013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.554047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.554213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.554246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.554414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.554454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.554655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.554688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.170 qpair failed and we were unable to recover it.
00:29:49.170 [2024-07-24 19:21:54.554890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.170 [2024-07-24 19:21:54.554923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.171 qpair failed and we were unable to recover it.
00:29:49.171 [2024-07-24 19:21:54.555124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.171 [2024-07-24 19:21:54.555157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.171 qpair failed and we were unable to recover it.
00:29:49.171 [2024-07-24 19:21:54.555342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.171 [2024-07-24 19:21:54.555375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.171 qpair failed and we were unable to recover it.
00:29:49.171 [2024-07-24 19:21:54.555558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.171 [2024-07-24 19:21:54.555591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.171 qpair failed and we were unable to recover it.
00:29:49.171 [2024-07-24 19:21:54.555763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.171 [2024-07-24 19:21:54.555796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.171 qpair failed and we were unable to recover it.
00:29:49.171 [2024-07-24 19:21:54.555972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.171 [2024-07-24 19:21:54.556005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.171 qpair failed and we were unable to recover it.
00:29:49.171 [2024-07-24 19:21:54.556191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.171 [2024-07-24 19:21:54.556224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.171 qpair failed and we were unable to recover it.
00:29:49.171 [2024-07-24 19:21:54.556438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.171 [2024-07-24 19:21:54.556472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.171 qpair failed and we were unable to recover it.
00:29:49.171 [2024-07-24 19:21:54.556673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.171 [2024-07-24 19:21:54.556706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.171 qpair failed and we were unable to recover it.
00:29:49.171 [2024-07-24 19:21:54.556915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.171 [2024-07-24 19:21:54.556948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.171 qpair failed and we were unable to recover it.
00:29:49.171 [2024-07-24 19:21:54.557129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.171 [2024-07-24 19:21:54.557162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.171 qpair failed and we were unable to recover it.
00:29:49.171 [2024-07-24 19:21:54.557370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.171 [2024-07-24 19:21:54.557402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.171 qpair failed and we were unable to recover it.
00:29:49.171 [2024-07-24 19:21:54.557598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.171 [2024-07-24 19:21:54.557631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.171 qpair failed and we were unable to recover it.
00:29:49.171 [2024-07-24 19:21:54.557808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.171 [2024-07-24 19:21:54.557841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.171 qpair failed and we were unable to recover it.
00:29:49.171 [2024-07-24 19:21:54.558013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.558046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.558238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.558271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.558442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.558475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.558688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.558721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.558874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.558908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.559074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.559107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.559285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.559318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.559479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.559513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.559680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.559712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.559880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.559913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 
00:29:49.171 [2024-07-24 19:21:54.560112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.560145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.560291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.560324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.560516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.560550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.560746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.560779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.560942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.560980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.561149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.561182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.561353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.561386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.561577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.561610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.171 [2024-07-24 19:21:54.561797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.171 [2024-07-24 19:21:54.561830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.171 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.562040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.562074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 
00:29:49.172 [2024-07-24 19:21:54.562246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.562279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.562454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.562488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.562667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.562699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.562889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.562922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.563092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.563125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.563332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.563365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.563550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.563592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.563767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.563801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.563981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.564014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.564213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.564246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 
00:29:49.172 [2024-07-24 19:21:54.564418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.564457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.564656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.564690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.564866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.564900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.565073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.565105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.565274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.565308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.565481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.565515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.565690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.565722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.565891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.565924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.566124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.566157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.566350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.566383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 
00:29:49.172 [2024-07-24 19:21:54.566590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.566623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.566811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.566844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.567010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.567043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.567253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.567287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.567457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.567495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.567692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.567726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.567933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.567966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.568126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.568159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.568337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.568370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.568580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.568613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 
00:29:49.172 [2024-07-24 19:21:54.568798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.568831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.568981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.569014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.172 qpair failed and we were unable to recover it. 00:29:49.172 [2024-07-24 19:21:54.569190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.172 [2024-07-24 19:21:54.569222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.569423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.569463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.569637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.569675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.569843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.569875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.570053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.570086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.570259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.570292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.570422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.570463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.570672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.570705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 
00:29:49.173 [2024-07-24 19:21:54.570900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.570933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.571145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.571178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.571337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.571369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.571589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.571623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.571785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.571818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.571998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.572031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.572206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.572239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.572460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.572493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.572635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.572668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.572883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.572917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 
00:29:49.173 [2024-07-24 19:21:54.573115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.573148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.573317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.573350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.573532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.573566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.573736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.573769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.573942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.573976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.574117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.574149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.574321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.574354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.574556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.574590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.574792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.574826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.574988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.575020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 
00:29:49.173 [2024-07-24 19:21:54.575192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.575225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.575403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.575443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.575580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.575613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.575788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.575821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.575982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.576016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.576177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.576210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.576356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.576389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.576576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.576610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.173 qpair failed and we were unable to recover it. 00:29:49.173 [2024-07-24 19:21:54.576750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.173 [2024-07-24 19:21:54.576783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.576955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.576988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 
00:29:49.174 [2024-07-24 19:21:54.577160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.577193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.577354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.577387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.577593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.577627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.577825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.577858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.578032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.578070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.578250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.578283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.578464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.578497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.578664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.578697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.578873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.578906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.579077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.579110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 
00:29:49.174 [2024-07-24 19:21:54.579278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.579311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.579487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.579521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.579695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.579728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.579936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.579969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.580103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.580136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.580337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.580370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.580556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.580589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.580753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.580786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.581005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.581039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.581206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.581245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 
00:29:49.174 [2024-07-24 19:21:54.581448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.581483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.581666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.581698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.581877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.581910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.582073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.582107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.582282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.582315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.582467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.582501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.582652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.582685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.582854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.582887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.583073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.583106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.583270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.583303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 
00:29:49.174 [2024-07-24 19:21:54.583492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.583525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.583666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.583699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.583900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.583933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.584091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.584124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.584283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.584316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.174 [2024-07-24 19:21:54.584534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.174 [2024-07-24 19:21:54.584568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.174 qpair failed and we were unable to recover it. 00:29:49.175 [2024-07-24 19:21:54.584731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.175 [2024-07-24 19:21:54.584764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.175 qpair failed and we were unable to recover it. 00:29:49.175 [2024-07-24 19:21:54.584935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.175 [2024-07-24 19:21:54.584968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.175 qpair failed and we were unable to recover it. 00:29:49.175 [2024-07-24 19:21:54.585140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.175 [2024-07-24 19:21:54.585173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.175 qpair failed and we were unable to recover it. 00:29:49.175 [2024-07-24 19:21:54.585334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.175 [2024-07-24 19:21:54.585367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.175 qpair failed and we were unable to recover it. 
00:29:49.175 [2024-07-24 19:21:54.585567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.175 [2024-07-24 19:21:54.585600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.175 qpair failed and we were unable to recover it. 00:29:49.175 [2024-07-24 19:21:54.585820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.175 [2024-07-24 19:21:54.585853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.175 qpair failed and we were unable to recover it. 00:29:49.175 [2024-07-24 19:21:54.586011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.175 [2024-07-24 19:21:54.586044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.175 qpair failed and we were unable to recover it. 00:29:49.175 [2024-07-24 19:21:54.586241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.175 [2024-07-24 19:21:54.586274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.175 qpair failed and we were unable to recover it. 00:29:49.175 [2024-07-24 19:21:54.586477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.175 [2024-07-24 19:21:54.586516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.175 qpair failed and we were unable to recover it. 00:29:49.175 [2024-07-24 19:21:54.586705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.175 [2024-07-24 19:21:54.586739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.175 qpair failed and we were unable to recover it. 00:29:49.175 [2024-07-24 19:21:54.586940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.175 [2024-07-24 19:21:54.586974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.175 qpair failed and we were unable to recover it. 00:29:49.175 [2024-07-24 19:21:54.587149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.175 [2024-07-24 19:21:54.587182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.175 qpair failed and we were unable to recover it. 00:29:49.175 [2024-07-24 19:21:54.587387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.175 [2024-07-24 19:21:54.587420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.175 qpair failed and we were unable to recover it. 00:29:49.175 [2024-07-24 19:21:54.587641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.175 [2024-07-24 19:21:54.587674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.175 qpair failed and we were unable to recover it. 
00:29:49.181 [2024-07-24 19:21:54.630482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.630516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.630701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.630745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.630915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.630951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.631145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.631178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.631373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.631406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.631638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.631671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.631823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.631857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.632030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.632064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.632237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.632271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.632444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.632478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 
00:29:49.181 [2024-07-24 19:21:54.632644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.632677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.632871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.632904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.633102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.633138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.633332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.633365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.633538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.633572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.633738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.633770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.633974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.634008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.634170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.634203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.634408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.634449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.634664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.634698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 
00:29:49.181 [2024-07-24 19:21:54.634863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.181 [2024-07-24 19:21:54.634896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.181 qpair failed and we were unable to recover it. 00:29:49.181 [2024-07-24 19:21:54.635094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.635131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.635294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.635330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.635506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.635542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.635736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.635770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.635938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.635971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.636160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.636193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.636382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.636415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.636600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.636633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.636834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.636866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 
00:29:49.182 [2024-07-24 19:21:54.637031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.637064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.637259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.637292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.637468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.637504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.637670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.637703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.637910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.637948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.638128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.638160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.638348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.638381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.638528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.638572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.638782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.638818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.639008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.639051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 
00:29:49.182 [2024-07-24 19:21:54.639264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.639297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.639470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.639504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.639702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.639739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.639884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.639915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.640108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.640140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.640299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.640331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.640498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.640532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.640706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.640743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.640965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.640999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.641177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.641209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 
00:29:49.182 [2024-07-24 19:21:54.641408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.641447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.641653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.641687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.641816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.641849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.641995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.642029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.642230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.642264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.642472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.642505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.642746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.182 [2024-07-24 19:21:54.642780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.182 qpair failed and we were unable to recover it. 00:29:49.182 [2024-07-24 19:21:54.642953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.642985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.643205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.643239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.643448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.643482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 
00:29:49.183 [2024-07-24 19:21:54.643623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.643655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.643806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.643840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.644039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.644075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.644247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.644280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.644442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.644474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.644673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.644707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.644888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.644921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.645114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.645149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.645361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.645396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.645632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.645669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 
00:29:49.183 [2024-07-24 19:21:54.645820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.645853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.646032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.646065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.646264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.646300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.646504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.646537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.646729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.646767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.646910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.646942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.647195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.647229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.647413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.647455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.647603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.647638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.647809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.647842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 
00:29:49.183 [2024-07-24 19:21:54.648011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.648043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.648211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.648244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.648396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.648437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.648648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.648682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.648949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.648983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.649184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.649218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.649421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.649472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.649676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.649710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.649902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.649937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 00:29:49.183 [2024-07-24 19:21:54.650139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.183 [2024-07-24 19:21:54.650175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.183 qpair failed and we were unable to recover it. 
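A note on the failure mode above: on Linux, errno = 111 is ECONNREFUSED, i.e. each TCP connection attempt to 10.0.0.2 port 4420 (the IANA-registered NVMe/TCP port) is being actively refused, which usually means nothing is listening on that address and port while the host keeps retrying. The standalone C sketch below reproduces the same errno outside SPDK; it is an illustration only, not SPDK's posix_sock_create(), and the address and port are simply copied from the log.

/* Minimal reproduction of the logged failure, for reference only: a
 * blocking connect() to an address/port with no listener fails with
 * errno == ECONNREFUSED (111 on Linux). This is not SPDK code; the
 * target address and port are copied from the log above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* Expected output while no target listens: errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}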
00:29:49.183 [... 2 further repeats against tqpair=0x7f5e18000b90 at 19:21:54.650362 and 19:21:54.650550 ...]
00:29:49.183 [2024-07-24 19:21:54.650815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.183 [2024-07-24 19:21:54.650868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:49.183 qpair failed and we were unable to recover it.
00:29:49.184 [... note the qpair handle change: the same error repeats 14 more times against tqpair=0x7f5e08000b90 between 19:21:54.651141 and 19:21:54.654218, then 3 times against tqpair=0x7f5e18000b90 again between 19:21:54.654460 and 19:21:54.654934 ...]
00:29:49.184 [... the same three-line error repeats 60 more times against tqpair=0x7f5e18000b90 between 19:21:54.655222 and 19:21:54.668710; errno = 111, addr=10.0.0.2, port=4420 throughout ...]
00:29:49.186 [2024-07-24 19:21:54.668888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.668922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.669097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.669130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.669277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.669309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.669498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.669530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.669743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.669780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.669968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.670005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.670200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.670234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.670446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.670480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.670621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.670654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.670813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.670849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 
00:29:49.186 [2024-07-24 19:21:54.671042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.671076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.671279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.671317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.671537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.671571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.671697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.671729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.671900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.671937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.672109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.672142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.672270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.672302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.672455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.672489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.672690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.672723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.672892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.672924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 
00:29:49.186 [2024-07-24 19:21:54.673133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.673165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.673360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.673392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.673605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.673638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.673851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.673884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.674048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.674082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.674335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.674370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.674598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.674632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.674865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.186 [2024-07-24 19:21:54.674898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.186 qpair failed and we were unable to recover it. 00:29:49.186 [2024-07-24 19:21:54.675054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.675087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.675289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.675322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 
00:29:49.187 [2024-07-24 19:21:54.675492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.675525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.675691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.675723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.675902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.675935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.676112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.676145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.676266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.676299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.676503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.676536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.676740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.676785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.677000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.677032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.677235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.677267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.677491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.677526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 
00:29:49.187 [2024-07-24 19:21:54.677724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.677758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.677968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.678002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.678199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.678232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.678452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.678488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.678668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.678702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.678901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.678934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.679133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.679165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.679343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.679375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.679585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.679626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.679851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.679884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 
00:29:49.187 [2024-07-24 19:21:54.680089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.680122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.680314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.680352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.680579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.680615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.680779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.680813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.680974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.681008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.681182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.681215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.681409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.681448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.681616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.681653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.681839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.681873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.682056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.682092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 
00:29:49.187 [2024-07-24 19:21:54.682223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.682255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.682432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.682465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.682646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.682679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.682854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.682890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.187 qpair failed and we were unable to recover it. 00:29:49.187 [2024-07-24 19:21:54.683054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.187 [2024-07-24 19:21:54.683087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.683273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.683320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.683479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.683513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.683711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.683744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.683917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.683950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.684122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.684154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 
00:29:49.188 [2024-07-24 19:21:54.684350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.684384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.684589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.684622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.684855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.684888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.685085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.685119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.685293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.685328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.685468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.685502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.685664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.685707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.685890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.685923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.686079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.686113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.686286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.686321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 
00:29:49.188 [2024-07-24 19:21:54.686516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.686549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.686749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.686780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.686978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.687010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.687181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.687218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.687414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.687468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.687637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.687670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.687829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.687860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.688043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.688076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.688251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.688286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.688474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.688508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 
00:29:49.188 [2024-07-24 19:21:54.688684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.688720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.688860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.688901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.689089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.689121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.689296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.689332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.689549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.689583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.689812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.689846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.690004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.690037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.690213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.690247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.690410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.690454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 00:29:49.188 [2024-07-24 19:21:54.690622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.188 [2024-07-24 19:21:54.690674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.188 qpair failed and we were unable to recover it. 
00:29:49.188 [... the sequence continues, alternating between tqpair=0x7f5e08000b90 and tqpair=0x7f5e18000b90, through 2024-07-24 19:21:54.697487 ...]
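errno = 111 in these entries is Linux's ECONNREFUSED: the host-side NVMe/TCP initiator's connect() to 10.0.0.2:4420 is refused (typically a TCP RST) because no target is listening on that port at that moment, so SPDK's posix sock layer fails the qpair and the test retries. A minimal, self-contained C sketch of the same failing call, assuming nothing is listening on 10.0.0.2:4420 (address and port taken from the log; this is illustrative code, not SPDK's implementation):

/* Sketch of the failing connect() reported by posix_sock_create above.
 * Assumes no listener is bound to 10.0.0.2:4420, so connect() fails
 * with errno 111 (ECONNREFUSED). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With nothing listening, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Once the target's listener binds the port, the same connect() succeeds and the qpair handshake can proceed, which is why the harness keeps retrying rather than aborting on the first refusal.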
00:29:49.189 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:29:49.189 [2024-07-24 19:21:54.697742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.189 [2024-07-24 19:21:54.697777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:49.189 qpair failed and we were unable to recover it.
00:29:49.189 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:29:49.189 [2024-07-24 19:21:54.697990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.189 [2024-07-24 19:21:54.698039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:49.189 qpair failed and we were unable to recover it.
00:29:49.189 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:49.189 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:49.189 [2024-07-24 19:21:54.698262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.189 [2024-07-24 19:21:54.698297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:49.189 qpair failed and we were unable to recover it.
00:29:49.189 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:49.189 [2024-07-24 19:21:54.698521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.189 [2024-07-24 19:21:54.698558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:49.189 qpair failed and we were unable to recover it.
00:29:49.189 [... the same connect()/qpair-failure sequence repeats for tqpair=0x7f5e08000b90 through 2024-07-24 19:21:54.699476 ...]
00:29:49.189 [2024-07-24 19:21:54.699751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.190 [2024-07-24 19:21:54.699785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:49.190 qpair failed and we were unable to recover it.
00:29:49.190 [... the sequence continues, alternating between tqpair=0x7f5e08000b90 and tqpair=0x7f5e18000b90 ...]
00:29:49.190 [2024-07-24 19:21:54.706675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.190 [2024-07-24 19:21:54.706727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:49.190 qpair failed and we were unable to recover it.
00:29:49.190 [2024-07-24 19:21:54.706925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.190 [2024-07-24 19:21:54.706962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420
00:29:49.190 qpair failed and we were unable to recover it.
00:29:49.190 [2024-07-24 19:21:54.707215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.190 [2024-07-24 19:21:54.707251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.190 qpair failed and we were unable to recover it. 00:29:49.190 [2024-07-24 19:21:54.707443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.190 [2024-07-24 19:21:54.707480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.190 qpair failed and we were unable to recover it. 00:29:49.190 [2024-07-24 19:21:54.707665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.190 [2024-07-24 19:21:54.707708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.190 qpair failed and we were unable to recover it. 00:29:49.190 [2024-07-24 19:21:54.707911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.190 [2024-07-24 19:21:54.707945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.190 qpair failed and we were unable to recover it. 00:29:49.190 [2024-07-24 19:21:54.708138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.190 [2024-07-24 19:21:54.708173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.190 qpair failed and we were unable to recover it. 00:29:49.190 [2024-07-24 19:21:54.708422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.190 [2024-07-24 19:21:54.708473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.190 qpair failed and we were unable to recover it. 00:29:49.190 [2024-07-24 19:21:54.708652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.190 [2024-07-24 19:21:54.708688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.708889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.708924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.709111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.709147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.709374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.709409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 
00:29:49.191 [2024-07-24 19:21:54.709574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.709608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.709786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.709822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.710024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.710058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.710266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.710301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.710498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.710534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.710685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.710719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.710905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.710947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.711099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.711133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.711335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.711370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.711532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.711568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 
00:29:49.191 [2024-07-24 19:21:54.711750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.711784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.711996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.712030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.712231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.712265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.712471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.712507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.712661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.712694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.712937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.712971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.713123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.713161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.713370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.713403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e08000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.713608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.713659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.713860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.713896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 
00:29:49.191 [2024-07-24 19:21:54.714040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.714078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.714252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.714285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.714450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.714482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.714654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.714687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.714906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.714944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.715169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.715203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.715476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.191 [2024-07-24 19:21:54.715511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.191 qpair failed and we were unable to recover it. 00:29:49.191 [2024-07-24 19:21:54.715662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.715696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.715877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.715912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.716129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.716162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 
00:29:49.192 [2024-07-24 19:21:54.716381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.716415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.716587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.716621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.716797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.716836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.717015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.717048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.717212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.717245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.717423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.717467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.717648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.717682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.717881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.717914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.718131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.718164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.718339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.718373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 
00:29:49.192 [2024-07-24 19:21:54.718530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.718562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.718712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.718746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.718923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.718957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.719143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.719175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.719372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.719404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.719578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.719611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.719798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.719839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.720056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.720090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.720300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.720334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.720500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.720534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 
00:29:49.192 [2024-07-24 19:21:54.720696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.720728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.720890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.720923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.721135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.721169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.192 [2024-07-24 19:21:54.721342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.721376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:49.192 [2024-07-24 19:21:54.721558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.721595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.192 [2024-07-24 19:21:54.721775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:49.192 [2024-07-24 19:21:54.721809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.722026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.722060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.722270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.722306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 
00:29:49.192 [2024-07-24 19:21:54.722486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.722520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.722662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.722695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.722859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.722892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.723063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.723096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.192 [2024-07-24 19:21:54.723331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.192 [2024-07-24 19:21:54.723367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.192 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.723538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.723573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.723790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.723823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.724014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.724046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.724244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.724277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.724459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.724493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 
00:29:49.193 [2024-07-24 19:21:54.724640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.724672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.724881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.724914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.725125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.725167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.725381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.725415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.725577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.725611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.725775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.725813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.726023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.726056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.726273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.726309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.726489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.726521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.726681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.726714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 
00:29:49.193 [2024-07-24 19:21:54.726920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.726953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.727148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.727182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.727333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.727366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.727537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.727570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.727706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.727738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.727927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.727961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.728128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.728161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.728321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.728353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.728518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.728551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.728732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.728765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 
00:29:49.193 [2024-07-24 19:21:54.728967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.729000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.729200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.729237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.729423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.729462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.729646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.729679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.729885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.729922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.730164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.730197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.730373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.730406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.730574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.730606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.730790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.730825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.730966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.731002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 
00:29:49.193 [2024-07-24 19:21:54.731200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.193 [2024-07-24 19:21:54.731233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.193 qpair failed and we were unable to recover it. 00:29:49.193 [2024-07-24 19:21:54.731464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.731499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.731689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.731723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.731919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.731961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.732189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.732223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.732360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.732392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.732579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.732615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.732818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.732855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.733040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.733073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.733279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.733312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 
00:29:49.194 [2024-07-24 19:21:54.733490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.733526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.733704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.733737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.733938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.733975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.734152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.734184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.734328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.734360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.734546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.734583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.734771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.734804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.735009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.735045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.735235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.735266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 00:29:49.194 [2024-07-24 19:21:54.735470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.194 [2024-07-24 19:21:54.735504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.194 qpair failed and we were unable to recover it. 
00:29:49.194 [2024-07-24 19:21:54.735719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.194 [2024-07-24 19:21:54.735761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.194 qpair failed and we were unable to recover it.
00:29:49.196 [2024-07-24 19:21:54.748939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.196 [2024-07-24 19:21:54.748973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.196 qpair failed and we were unable to recover it.
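errno = 111 is ECONNREFUSED on Linux: nothing is accepting TCP connections on 10.0.0.2:4420 yet, so each host-side connect() above is refused and the initiator keeps retrying while the target is being configured. A minimal shell sketch of the same probe, assuming only that nc is available on the test host; the loop is illustrative and not part of the harness:

    # Poll 10.0.0.2:4420 until the target's listener accepts a TCP
    # connection; until then connect() fails with ECONNREFUSED (111),
    # matching the posix_sock_create errors in the log above.
    until nc -z -w 1 10.0.0.2 4420; do
        echo "connect refused; listener not up yet, retrying"
        sleep 0.1
    done
    echo "10.0.0.2:4420 is now accepting connections"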
00:29:49.196 [2024-07-24 19:21:54.749159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.196 [2024-07-24 19:21:54.749191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.196 qpair failed and we were unable to recover it.
00:29:49.196 Malloc0
00:29:49.196 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:49.196 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:49.196 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:49.196 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:49.196 [2024-07-24 19:21:54.751063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.196 [2024-07-24 19:21:54.751097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.196 qpair failed and we were unable to recover it.
00:29:49.196 [2024-07-24 19:21:54.753676] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:49.196 [2024-07-24 19:21:54.753752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.196 [2024-07-24 19:21:54.753784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.196 qpair failed and we were unable to recover it.
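The rpc_cmd nvmf_create_transport call above, answered by the target's "*** TCP Transport Init ***" notice, is the first target-side setup step. Assuming rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py (and with the -o flag copied verbatim from the log), the standalone equivalent would look roughly like:

    # Create the TCP transport inside a running nvmf_tgt; the target
    # prints the "*** TCP Transport Init ***" notice once it is up.
    # Note this alone does not open any port: no listener exists yet,
    # so the host's connect() retries keep failing.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o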
00:29:49.197 [2024-07-24 19:21:54.755086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.197 [2024-07-24 19:21:54.755119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.197 qpair failed and we were unable to recover it.
00:29:49.197 [2024-07-24 19:21:54.761644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.197 [2024-07-24 19:21:54.761676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.197 qpair failed and we were unable to recover it.
00:29:49.197 [2024-07-24 19:21:54.761825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.197 [2024-07-24 19:21:54.761858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.197 qpair failed and we were unable to recover it.
00:29:49.197 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:49.197 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:49.198 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:49.198 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
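Next the test creates the subsystem the host will connect to. Sketched standalone under the same rpc.py assumption: -a allows any host NQN to connect, and -s sets the subsystem serial number seen in the log.

    # Create the NVMe-oF subsystem; the host's connect attempts still
    # fail because no TCP listener exists for it yet.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001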
00:29:49.198 [2024-07-24 19:21:54.763817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.198 [2024-07-24 19:21:54.763850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.198 qpair failed and we were unable to recover it.
00:29:49.198 [2024-07-24 19:21:54.768241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.198 [2024-07-24 19:21:54.768274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.198 qpair failed and we were unable to recover it.
00:29:49.198 [2024-07-24 19:21:54.768482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.198 [2024-07-24 19:21:54.768515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.198 qpair failed and we were unable to recover it.
00:29:49.198 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:49.198 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
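The Malloc0 bdev announced earlier in the log is then attached as a namespace of the subsystem. Roughly, under the same rpc.py assumption:

    # Expose the Malloc0 malloc bdev as a namespace of cnode1, so the
    # host sees a block device once it finally connects.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0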
00:29:49.199 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:49.199 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:49.199 [2024-07-24 19:21:54.770498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.199 [2024-07-24 19:21:54.770531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.199 qpair failed and we were unable to recover it.
00:29:49.199 [2024-07-24 19:21:54.772605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.199 [2024-07-24 19:21:54.772638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.199 qpair failed and we were unable to recover it.
00:29:49.199 [2024-07-24 19:21:54.777004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.199 [2024-07-24 19:21:54.777036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420
00:29:49.199 qpair failed and we were unable to recover it.
00:29:49.199 [2024-07-24 19:21:54.777263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.199 [2024-07-24 19:21:54.777296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.199 qpair failed and we were unable to recover it. 00:29:49.199 [2024-07-24 19:21:54.777497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.199 [2024-07-24 19:21:54.777530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.199 qpair failed and we were unable to recover it. 00:29:49.199 [2024-07-24 19:21:54.777780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.199 [2024-07-24 19:21:54.777812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.199 qpair failed and we were unable to recover it. 00:29:49.199 [2024-07-24 19:21:54.777988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.199 [2024-07-24 19:21:54.778021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.199 qpair failed and we were unable to recover it. 00:29:49.199 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:49.199 [2024-07-24 19:21:54.778178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.199 [2024-07-24 19:21:54.778216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.199 qpair failed and we were unable to recover it. 00:29:49.199 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.199 [2024-07-24 19:21:54.778392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.199 [2024-07-24 19:21:54.778424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.199 qpair failed and we were unable to recover it. 00:29:49.199 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:49.199 [2024-07-24 19:21:54.778642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:49.200 [2024-07-24 19:21:54.778675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 00:29:49.200 [2024-07-24 19:21:54.778875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 [2024-07-24 19:21:54.778907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 
00:29:49.200 [2024-07-24 19:21:54.779072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 [2024-07-24 19:21:54.779104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 00:29:49.200 [2024-07-24 19:21:54.779282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 [2024-07-24 19:21:54.779315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 00:29:49.200 [2024-07-24 19:21:54.779486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 [2024-07-24 19:21:54.779519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 00:29:49.200 [2024-07-24 19:21:54.779657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 [2024-07-24 19:21:54.779690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 00:29:49.200 [2024-07-24 19:21:54.779905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 [2024-07-24 19:21:54.779938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 00:29:49.200 [2024-07-24 19:21:54.780070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 [2024-07-24 19:21:54.780103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 00:29:49.200 [2024-07-24 19:21:54.780282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 [2024-07-24 19:21:54.780314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 00:29:49.200 [2024-07-24 19:21:54.780497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 [2024-07-24 19:21:54.780530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 00:29:49.200 [2024-07-24 19:21:54.780692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 [2024-07-24 19:21:54.780723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 00:29:49.200 [2024-07-24 19:21:54.780924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 [2024-07-24 19:21:54.780958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 
00:29:49.200 [2024-07-24 19:21:54.781198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 [2024-07-24 19:21:54.781231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 00:29:49.200 [2024-07-24 19:21:54.781405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 [2024-07-24 19:21:54.781443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 00:29:49.200 [2024-07-24 19:21:54.781705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 [2024-07-24 19:21:54.781737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 00:29:49.200 [2024-07-24 19:21:54.781897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:49.200 [2024-07-24 19:21:54.781931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5e18000b90 with addr=10.0.0.2, port=4420 00:29:49.200 qpair failed and we were unable to recover it. 00:29:49.200 [2024-07-24 19:21:54.782086] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.200 [2024-07-24 19:21:54.784610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.200 [2024-07-24 19:21:54.784778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.200 [2024-07-24 19:21:54.784813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.200 [2024-07-24 19:21:54.784832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.200 [2024-07-24 19:21:54.784847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e18000b90 00:29:49.200 [2024-07-24 19:21:54.784891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:49.200 qpair failed and we were unable to recover it. 
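Two distinct failure modes meet at this point in the log. Up to the nvmf_tcp_listen notice, every attempt dies at the socket layer with errno = 111 (ECONNREFUSED), because nothing is listening on 10.0.0.2:4420 yet; from the notice onward the TCP connection succeeds and the failure moves up to the NVMe-oF Fabric CONNECT command. A minimal standalone C sketch (illustration only, not part of the test; the address and port are taken from the log) that produces the same errno:

/* Sketch: a plain TCP connect() to an endpoint with no listener fails
 * with errno 111 (ECONNREFUSED), the error repeated throughout this log
 * until the NVMe/TCP listener comes back up. Assumes the address itself
 * is reachable; an unreachable host would time out instead. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),  /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target this prints:
         * connect failed: errno = 111 (Connection refused) */
        printf("connect failed: errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}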
00:29:49.200 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:49.200 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:49.200 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:49.200 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:49.200 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:49.200 [2024-07-24 19:21:54.794405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.200 19:21:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1777275
00:29:49.200 [2024-07-24 19:21:54.794561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.200 [2024-07-24 19:21:54.794596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.200 [2024-07-24 19:21:54.794622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.200 [2024-07-24 19:21:54.794638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e18000b90
00:29:49.200 [2024-07-24 19:21:54.794676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:49.200 qpair failed and we were unable to recover it.
00:29:49.200 [2024-07-24 19:21:54.804452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.200 [2024-07-24 19:21:54.804593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.200 [2024-07-24 19:21:54.804628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.200 [2024-07-24 19:21:54.804647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.200 [2024-07-24 19:21:54.804663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e18000b90
00:29:49.200 [2024-07-24 19:21:54.804700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:49.200 qpair failed and we were unable to recover it.
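From here every recovery attempt fails at the fabrics layer instead of the socket layer: the listener is back, but the target apparently no longer has a controller with ID 0x1 (it went away with the forced disconnect), so _nvmf_ctrlr_add_io_qpair rejects the I/O-queue CONNECT and the host sees "sct 1, sc 130". Decoded: status code type 1 is the command-specific set, and 130 is 0x82, the Fabrics CONNECT "Invalid Parameters" status code; the CQ transport error -6 reported afterwards is -ENXIO. A small standalone sketch (illustration only, not SPDK source) of how those two values are packed in the 16-bit NVMe completion status field:

/* Sketch: decode the sct/sc pair printed by _nvme_fabric_qpair_connect_poll
 * from a raw NVMe completion status field (CQE DW3 bits 31:16). Layout per
 * the NVMe base spec: phase tag bit 0, SC bits 8:1, SCT bits 11:9. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Assemble the field the way a target would for this failure:
     * SCT 0x1 = Command Specific Status, SC 0x82 (130) = Fabrics CONNECT
     * Invalid Parameters (here: CONNECT named a controller ID the target
     * does not know). */
    uint16_t status = (uint16_t)((1u << 9) | (130u << 1));

    unsigned sc  = (status >> 1) & 0xff; /* Status Code */
    unsigned sct = (status >> 9) & 0x7;  /* Status Code Type */

    /* Prints: sct 1, sc 130 (0x82), matching the log */
    printf("sct %u, sc %u (0x%02x)\n", sct, sc, sc);
    return 0;
}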
00:29:49.200 [2024-07-24 19:21:54.814391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.200 [2024-07-24 19:21:54.814562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.200 [2024-07-24 19:21:54.814597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.200 [2024-07-24 19:21:54.814616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.200 [2024-07-24 19:21:54.814631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e18000b90
00:29:49.200 [2024-07-24 19:21:54.814670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:49.200 qpair failed and we were unable to recover it.
00:29:49.200 [2024-07-24 19:21:54.824435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.200 [2024-07-24 19:21:54.824580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.200 [2024-07-24 19:21:54.824625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.201 [2024-07-24 19:21:54.824646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.201 [2024-07-24 19:21:54.824662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.201 [2024-07-24 19:21:54.824703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.201 qpair failed and we were unable to recover it.
00:29:49.201 [2024-07-24 19:21:54.834460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.201 [2024-07-24 19:21:54.834607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.201 [2024-07-24 19:21:54.834643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.201 [2024-07-24 19:21:54.834663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.201 [2024-07-24 19:21:54.834678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.201 [2024-07-24 19:21:54.834718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.201 qpair failed and we were unable to recover it.
00:29:49.201 [2024-07-24 19:21:54.844805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.201 [2024-07-24 19:21:54.844958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.201 [2024-07-24 19:21:54.844994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.201 [2024-07-24 19:21:54.845013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.201 [2024-07-24 19:21:54.845029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.201 [2024-07-24 19:21:54.845067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.201 qpair failed and we were unable to recover it.
00:29:49.459 [2024-07-24 19:21:54.854538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.459 [2024-07-24 19:21:54.854709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.459 [2024-07-24 19:21:54.854743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.459 [2024-07-24 19:21:54.854762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.459 [2024-07-24 19:21:54.854777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.459 [2024-07-24 19:21:54.854815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.459 qpair failed and we were unable to recover it.
00:29:49.459 [2024-07-24 19:21:54.864578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.459 [2024-07-24 19:21:54.864721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.459 [2024-07-24 19:21:54.864757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.459 [2024-07-24 19:21:54.864776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.459 [2024-07-24 19:21:54.864791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.459 [2024-07-24 19:21:54.864829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.459 qpair failed and we were unable to recover it.
00:29:49.459 [2024-07-24 19:21:54.874576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.459 [2024-07-24 19:21:54.874717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.459 [2024-07-24 19:21:54.874756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.459 [2024-07-24 19:21:54.874774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.459 [2024-07-24 19:21:54.874790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.459 [2024-07-24 19:21:54.874828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.459 qpair failed and we were unable to recover it.
00:29:49.459 [2024-07-24 19:21:54.884547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.459 [2024-07-24 19:21:54.884684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.459 [2024-07-24 19:21:54.884733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.459 [2024-07-24 19:21:54.884752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.459 [2024-07-24 19:21:54.884768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.459 [2024-07-24 19:21:54.884806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.459 qpair failed and we were unable to recover it.
00:29:49.459 [2024-07-24 19:21:54.894602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.459 [2024-07-24 19:21:54.894753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.459 [2024-07-24 19:21:54.894787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.459 [2024-07-24 19:21:54.894806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.459 [2024-07-24 19:21:54.894821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.459 [2024-07-24 19:21:54.894860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.459 qpair failed and we were unable to recover it.
00:29:49.459 [2024-07-24 19:21:54.904659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.459 [2024-07-24 19:21:54.904831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.459 [2024-07-24 19:21:54.904866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.459 [2024-07-24 19:21:54.904885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.459 [2024-07-24 19:21:54.904901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.459 [2024-07-24 19:21:54.904939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.459 qpair failed and we were unable to recover it.
00:29:49.459 [2024-07-24 19:21:54.914684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.459 [2024-07-24 19:21:54.914821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.459 [2024-07-24 19:21:54.914855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.459 [2024-07-24 19:21:54.914875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.459 [2024-07-24 19:21:54.914890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.459 [2024-07-24 19:21:54.914928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.459 qpair failed and we were unable to recover it.
00:29:49.459 [2024-07-24 19:21:54.924708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.459 [2024-07-24 19:21:54.924850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.459 [2024-07-24 19:21:54.924884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.459 [2024-07-24 19:21:54.924903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.460 [2024-07-24 19:21:54.924918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.460 [2024-07-24 19:21:54.924963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.460 qpair failed and we were unable to recover it.
00:29:49.460 [2024-07-24 19:21:54.934710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.460 [2024-07-24 19:21:54.934851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.460 [2024-07-24 19:21:54.934886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.460 [2024-07-24 19:21:54.934904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.460 [2024-07-24 19:21:54.934919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.460 [2024-07-24 19:21:54.934957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.460 qpair failed and we were unable to recover it.
00:29:49.460 [2024-07-24 19:21:54.944804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.460 [2024-07-24 19:21:54.944973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.460 [2024-07-24 19:21:54.945008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.460 [2024-07-24 19:21:54.945027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.460 [2024-07-24 19:21:54.945042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.460 [2024-07-24 19:21:54.945080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.460 qpair failed and we were unable to recover it.
00:29:49.460 [2024-07-24 19:21:54.954843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.460 [2024-07-24 19:21:54.955004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.460 [2024-07-24 19:21:54.955039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.460 [2024-07-24 19:21:54.955058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.460 [2024-07-24 19:21:54.955073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.460 [2024-07-24 19:21:54.955112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.460 qpair failed and we were unable to recover it.
00:29:49.460 [2024-07-24 19:21:54.964861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.460 [2024-07-24 19:21:54.965003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.460 [2024-07-24 19:21:54.965037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.460 [2024-07-24 19:21:54.965055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.460 [2024-07-24 19:21:54.965071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.460 [2024-07-24 19:21:54.965108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.460 qpair failed and we were unable to recover it.
00:29:49.460 [2024-07-24 19:21:54.974808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.460 [2024-07-24 19:21:54.974959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.460 [2024-07-24 19:21:54.974999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.460 [2024-07-24 19:21:54.975019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.460 [2024-07-24 19:21:54.975034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.460 [2024-07-24 19:21:54.975072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.460 qpair failed and we were unable to recover it.
00:29:49.460 [2024-07-24 19:21:54.984855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.460 [2024-07-24 19:21:54.984996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.460 [2024-07-24 19:21:54.985031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.460 [2024-07-24 19:21:54.985049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.460 [2024-07-24 19:21:54.985065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.460 [2024-07-24 19:21:54.985102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.460 qpair failed and we were unable to recover it.
00:29:49.460 [2024-07-24 19:21:54.994921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.460 [2024-07-24 19:21:54.995054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.460 [2024-07-24 19:21:54.995088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.460 [2024-07-24 19:21:54.995107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.460 [2024-07-24 19:21:54.995122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.460 [2024-07-24 19:21:54.995160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.460 qpair failed and we were unable to recover it.
00:29:49.460 [2024-07-24 19:21:55.004940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.460 [2024-07-24 19:21:55.005076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.460 [2024-07-24 19:21:55.005109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.460 [2024-07-24 19:21:55.005128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.460 [2024-07-24 19:21:55.005143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.460 [2024-07-24 19:21:55.005181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.460 qpair failed and we were unable to recover it.
00:29:49.460 [2024-07-24 19:21:55.014911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.460 [2024-07-24 19:21:55.015056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.460 [2024-07-24 19:21:55.015090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.460 [2024-07-24 19:21:55.015108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.460 [2024-07-24 19:21:55.015131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.460 [2024-07-24 19:21:55.015170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.460 qpair failed and we were unable to recover it.
00:29:49.460 [2024-07-24 19:21:55.024945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.460 [2024-07-24 19:21:55.025098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.460 [2024-07-24 19:21:55.025131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.460 [2024-07-24 19:21:55.025150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.460 [2024-07-24 19:21:55.025165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.460 [2024-07-24 19:21:55.025202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.460 qpair failed and we were unable to recover it.
00:29:49.460 [2024-07-24 19:21:55.035005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.460 [2024-07-24 19:21:55.035161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.460 [2024-07-24 19:21:55.035195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.460 [2024-07-24 19:21:55.035213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.460 [2024-07-24 19:21:55.035229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.460 [2024-07-24 19:21:55.035267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.460 qpair failed and we were unable to recover it.
00:29:49.460 [2024-07-24 19:21:55.045008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.460 [2024-07-24 19:21:55.045148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.460 [2024-07-24 19:21:55.045182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.460 [2024-07-24 19:21:55.045200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.460 [2024-07-24 19:21:55.045215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.460 [2024-07-24 19:21:55.045253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.460 qpair failed and we were unable to recover it.
00:29:49.460 [2024-07-24 19:21:55.055057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.460 [2024-07-24 19:21:55.055202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.460 [2024-07-24 19:21:55.055236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.460 [2024-07-24 19:21:55.055255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.460 [2024-07-24 19:21:55.055269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.461 [2024-07-24 19:21:55.055308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.461 qpair failed and we were unable to recover it.
00:29:49.461 [2024-07-24 19:21:55.065151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.461 [2024-07-24 19:21:55.065294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.461 [2024-07-24 19:21:55.065328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.461 [2024-07-24 19:21:55.065346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.461 [2024-07-24 19:21:55.065361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.461 [2024-07-24 19:21:55.065400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.461 qpair failed and we were unable to recover it.
00:29:49.461 [2024-07-24 19:21:55.075122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.461 [2024-07-24 19:21:55.075259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.461 [2024-07-24 19:21:55.075293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.461 [2024-07-24 19:21:55.075311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.461 [2024-07-24 19:21:55.075326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.461 [2024-07-24 19:21:55.075364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.461 qpair failed and we were unable to recover it.
00:29:49.461 [2024-07-24 19:21:55.085173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.461 [2024-07-24 19:21:55.085325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.461 [2024-07-24 19:21:55.085359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.461 [2024-07-24 19:21:55.085377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.461 [2024-07-24 19:21:55.085392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.461 [2024-07-24 19:21:55.085437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.461 qpair failed and we were unable to recover it.
00:29:49.461 [2024-07-24 19:21:55.095127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.461 [2024-07-24 19:21:55.095271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.461 [2024-07-24 19:21:55.095305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.461 [2024-07-24 19:21:55.095324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.461 [2024-07-24 19:21:55.095339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.461 [2024-07-24 19:21:55.095377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.461 qpair failed and we were unable to recover it.
00:29:49.461 [2024-07-24 19:21:55.105185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.461 [2024-07-24 19:21:55.105327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.461 [2024-07-24 19:21:55.105362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.461 [2024-07-24 19:21:55.105380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.461 [2024-07-24 19:21:55.105403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.461 [2024-07-24 19:21:55.105454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.461 qpair failed and we were unable to recover it.
00:29:49.461 [2024-07-24 19:21:55.115219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.461 [2024-07-24 19:21:55.115400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.461 [2024-07-24 19:21:55.115442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.461 [2024-07-24 19:21:55.115462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.461 [2024-07-24 19:21:55.115478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.461 [2024-07-24 19:21:55.115516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.461 qpair failed and we were unable to recover it.
00:29:49.461 [2024-07-24 19:21:55.125214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.461 [2024-07-24 19:21:55.125354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.461 [2024-07-24 19:21:55.125388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.461 [2024-07-24 19:21:55.125406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.461 [2024-07-24 19:21:55.125421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.461 [2024-07-24 19:21:55.125470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.461 qpair failed and we were unable to recover it.
00:29:49.461 [2024-07-24 19:21:55.135273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.461 [2024-07-24 19:21:55.135417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.461 [2024-07-24 19:21:55.135460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.461 [2024-07-24 19:21:55.135479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.461 [2024-07-24 19:21:55.135494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.461 [2024-07-24 19:21:55.135532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.461 qpair failed and we were unable to recover it.
00:29:49.461 [2024-07-24 19:21:55.145269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.461 [2024-07-24 19:21:55.145402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.461 [2024-07-24 19:21:55.145444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.461 [2024-07-24 19:21:55.145465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.461 [2024-07-24 19:21:55.145480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.461 [2024-07-24 19:21:55.145519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.461 qpair failed and we were unable to recover it.
00:29:49.720 [2024-07-24 19:21:55.155476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.720 [2024-07-24 19:21:55.155667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.720 [2024-07-24 19:21:55.155702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.720 [2024-07-24 19:21:55.155720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.720 [2024-07-24 19:21:55.155735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.720 [2024-07-24 19:21:55.155774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.720 qpair failed and we were unable to recover it.
00:29:49.720 [2024-07-24 19:21:55.165351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.720 [2024-07-24 19:21:55.165491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.720 [2024-07-24 19:21:55.165525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.720 [2024-07-24 19:21:55.165544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.720 [2024-07-24 19:21:55.165559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.720 [2024-07-24 19:21:55.165598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.720 qpair failed and we were unable to recover it.
00:29:49.720 [2024-07-24 19:21:55.175348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.720 [2024-07-24 19:21:55.175557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.720 [2024-07-24 19:21:55.175592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.720 [2024-07-24 19:21:55.175610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.720 [2024-07-24 19:21:55.175625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.720 [2024-07-24 19:21:55.175664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.720 qpair failed and we were unable to recover it.
00:29:49.720 [2024-07-24 19:21:55.185408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.720 [2024-07-24 19:21:55.185560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.720 [2024-07-24 19:21:55.185595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.720 [2024-07-24 19:21:55.185613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.720 [2024-07-24 19:21:55.185628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.720 [2024-07-24 19:21:55.185666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.720 qpair failed and we were unable to recover it.
00:29:49.720 [2024-07-24 19:21:55.195462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.720 [2024-07-24 19:21:55.195611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.720 [2024-07-24 19:21:55.195645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.720 [2024-07-24 19:21:55.195671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.720 [2024-07-24 19:21:55.195687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.720 [2024-07-24 19:21:55.195726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.720 qpair failed and we were unable to recover it.
00:29:49.720 [2024-07-24 19:21:55.205481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.720 [2024-07-24 19:21:55.205615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.720 [2024-07-24 19:21:55.205655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.720 [2024-07-24 19:21:55.205673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.720 [2024-07-24 19:21:55.205688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.720 [2024-07-24 19:21:55.205726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.720 qpair failed and we were unable to recover it.
00:29:49.720 [2024-07-24 19:21:55.215526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.720 [2024-07-24 19:21:55.215700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.720 [2024-07-24 19:21:55.215733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.720 [2024-07-24 19:21:55.215752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.720 [2024-07-24 19:21:55.215767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.720 [2024-07-24 19:21:55.215805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.720 qpair failed and we were unable to recover it.
00:29:49.720 [2024-07-24 19:21:55.225540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.720 [2024-07-24 19:21:55.225680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.720 [2024-07-24 19:21:55.225714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.720 [2024-07-24 19:21:55.225732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.720 [2024-07-24 19:21:55.225748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.720 [2024-07-24 19:21:55.225786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.720 qpair failed and we were unable to recover it.
00:29:49.720 [2024-07-24 19:21:55.235546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.720 [2024-07-24 19:21:55.235691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.720 [2024-07-24 19:21:55.235725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.720 [2024-07-24 19:21:55.235743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.720 [2024-07-24 19:21:55.235759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.720 [2024-07-24 19:21:55.235797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.720 qpair failed and we were unable to recover it.
00:29:49.720 [2024-07-24 19:21:55.245596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.720 [2024-07-24 19:21:55.245768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.720 [2024-07-24 19:21:55.245802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.720 [2024-07-24 19:21:55.245823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.720 [2024-07-24 19:21:55.245840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.721 [2024-07-24 19:21:55.245878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.721 qpair failed and we were unable to recover it.
00:29:49.721 [2024-07-24 19:21:55.255614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.721 [2024-07-24 19:21:55.255753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.721 [2024-07-24 19:21:55.255787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.721 [2024-07-24 19:21:55.255806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.721 [2024-07-24 19:21:55.255821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.721 [2024-07-24 19:21:55.255859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.721 qpair failed and we were unable to recover it.
00:29:49.721 [2024-07-24 19:21:55.265619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.721 [2024-07-24 19:21:55.265756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.721 [2024-07-24 19:21:55.265791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.721 [2024-07-24 19:21:55.265810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.721 [2024-07-24 19:21:55.265825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.721 [2024-07-24 19:21:55.265863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.721 qpair failed and we were unable to recover it.
00:29:49.721 [2024-07-24 19:21:55.275661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.721 [2024-07-24 19:21:55.275801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.721 [2024-07-24 19:21:55.275835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.721 [2024-07-24 19:21:55.275854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.721 [2024-07-24 19:21:55.275869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.721 [2024-07-24 19:21:55.275907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.721 qpair failed and we were unable to recover it.
00:29:49.721 [2024-07-24 19:21:55.285678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.721 [2024-07-24 19:21:55.285851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.721 [2024-07-24 19:21:55.285892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.721 [2024-07-24 19:21:55.285912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.721 [2024-07-24 19:21:55.285927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.721 [2024-07-24 19:21:55.285964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.721 qpair failed and we were unable to recover it.
00:29:49.721 [2024-07-24 19:21:55.295717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.721 [2024-07-24 19:21:55.295862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.721 [2024-07-24 19:21:55.295896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.721 [2024-07-24 19:21:55.295913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.721 [2024-07-24 19:21:55.295928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.721 [2024-07-24 19:21:55.295966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.721 qpair failed and we were unable to recover it.
00:29:49.721 [2024-07-24 19:21:55.305768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.721 [2024-07-24 19:21:55.305934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.721 [2024-07-24 19:21:55.305968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.721 [2024-07-24 19:21:55.305986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.721 [2024-07-24 19:21:55.306001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.721 [2024-07-24 19:21:55.306039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.721 qpair failed and we were unable to recover it.
00:29:49.721 [2024-07-24 19:21:55.315772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:49.721 [2024-07-24 19:21:55.315910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:49.721 [2024-07-24 19:21:55.315943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:49.721 [2024-07-24 19:21:55.315962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:49.721 [2024-07-24 19:21:55.315977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:49.721 [2024-07-24 19:21:55.316015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:49.721 qpair failed and we were unable to recover it.
00:29:49.721 [2024-07-24 19:21:55.325787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.721 [2024-07-24 19:21:55.325916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.721 [2024-07-24 19:21:55.325949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.721 [2024-07-24 19:21:55.325968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.721 [2024-07-24 19:21:55.325982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.721 [2024-07-24 19:21:55.326030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.721 qpair failed and we were unable to recover it. 00:29:49.721 [2024-07-24 19:21:55.335848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.721 [2024-07-24 19:21:55.335995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.721 [2024-07-24 19:21:55.336028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.721 [2024-07-24 19:21:55.336046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.721 [2024-07-24 19:21:55.336061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.721 [2024-07-24 19:21:55.336099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.721 qpair failed and we were unable to recover it. 00:29:49.721 [2024-07-24 19:21:55.345889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.721 [2024-07-24 19:21:55.346022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.721 [2024-07-24 19:21:55.346056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.721 [2024-07-24 19:21:55.346074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.721 [2024-07-24 19:21:55.346089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.721 [2024-07-24 19:21:55.346126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.721 qpair failed and we were unable to recover it. 
00:29:49.721 [2024-07-24 19:21:55.355870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.721 [2024-07-24 19:21:55.356043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.721 [2024-07-24 19:21:55.356076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.721 [2024-07-24 19:21:55.356094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.721 [2024-07-24 19:21:55.356109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.721 [2024-07-24 19:21:55.356147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.721 qpair failed and we were unable to recover it. 00:29:49.721 [2024-07-24 19:21:55.365952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.721 [2024-07-24 19:21:55.366089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.721 [2024-07-24 19:21:55.366123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.721 [2024-07-24 19:21:55.366141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.721 [2024-07-24 19:21:55.366157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.721 [2024-07-24 19:21:55.366194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.721 qpair failed and we were unable to recover it. 00:29:49.721 [2024-07-24 19:21:55.375990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.721 [2024-07-24 19:21:55.376192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.721 [2024-07-24 19:21:55.376231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.721 [2024-07-24 19:21:55.376251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.721 [2024-07-24 19:21:55.376266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.721 [2024-07-24 19:21:55.376304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.721 qpair failed and we were unable to recover it. 
00:29:49.721 [2024-07-24 19:21:55.385959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.722 [2024-07-24 19:21:55.386116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.722 [2024-07-24 19:21:55.386150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.722 [2024-07-24 19:21:55.386169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.722 [2024-07-24 19:21:55.386184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.722 [2024-07-24 19:21:55.386221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.722 qpair failed and we were unable to recover it. 00:29:49.722 [2024-07-24 19:21:55.396008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.722 [2024-07-24 19:21:55.396148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.722 [2024-07-24 19:21:55.396183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.722 [2024-07-24 19:21:55.396202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.722 [2024-07-24 19:21:55.396217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.722 [2024-07-24 19:21:55.396256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.722 qpair failed and we were unable to recover it. 00:29:49.722 [2024-07-24 19:21:55.406033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.722 [2024-07-24 19:21:55.406169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.722 [2024-07-24 19:21:55.406202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.722 [2024-07-24 19:21:55.406221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.722 [2024-07-24 19:21:55.406236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.722 [2024-07-24 19:21:55.406274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.722 qpair failed and we were unable to recover it. 
00:29:49.981 [2024-07-24 19:21:55.416090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.981 [2024-07-24 19:21:55.416231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.981 [2024-07-24 19:21:55.416265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.981 [2024-07-24 19:21:55.416284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.981 [2024-07-24 19:21:55.416306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.981 [2024-07-24 19:21:55.416346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.981 qpair failed and we were unable to recover it. 00:29:49.981 [2024-07-24 19:21:55.426099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.981 [2024-07-24 19:21:55.426232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.981 [2024-07-24 19:21:55.426265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.981 [2024-07-24 19:21:55.426284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.981 [2024-07-24 19:21:55.426299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.981 [2024-07-24 19:21:55.426337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.981 qpair failed and we were unable to recover it. 00:29:49.981 [2024-07-24 19:21:55.436133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.981 [2024-07-24 19:21:55.436281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.981 [2024-07-24 19:21:55.436315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.981 [2024-07-24 19:21:55.436334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.981 [2024-07-24 19:21:55.436350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.981 [2024-07-24 19:21:55.436388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.981 qpair failed and we were unable to recover it. 
00:29:49.981 [2024-07-24 19:21:55.446163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.981 [2024-07-24 19:21:55.446310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.981 [2024-07-24 19:21:55.446344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.981 [2024-07-24 19:21:55.446363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.981 [2024-07-24 19:21:55.446378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.981 [2024-07-24 19:21:55.446416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.981 qpair failed and we were unable to recover it. 00:29:49.981 [2024-07-24 19:21:55.456212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.981 [2024-07-24 19:21:55.456350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.981 [2024-07-24 19:21:55.456384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.981 [2024-07-24 19:21:55.456402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.981 [2024-07-24 19:21:55.456417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.981 [2024-07-24 19:21:55.456467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.981 qpair failed and we were unable to recover it. 00:29:49.981 [2024-07-24 19:21:55.466256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.981 [2024-07-24 19:21:55.466439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.981 [2024-07-24 19:21:55.466474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.981 [2024-07-24 19:21:55.466493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.981 [2024-07-24 19:21:55.466508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.981 [2024-07-24 19:21:55.466546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.981 qpair failed and we were unable to recover it. 
00:29:49.981 [2024-07-24 19:21:55.476254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.981 [2024-07-24 19:21:55.476382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.981 [2024-07-24 19:21:55.476417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.981 [2024-07-24 19:21:55.476445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.981 [2024-07-24 19:21:55.476461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.981 [2024-07-24 19:21:55.476501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.981 qpair failed and we were unable to recover it. 00:29:49.981 [2024-07-24 19:21:55.486288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.981 [2024-07-24 19:21:55.486448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.981 [2024-07-24 19:21:55.486483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.981 [2024-07-24 19:21:55.486502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.981 [2024-07-24 19:21:55.486516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.981 [2024-07-24 19:21:55.486554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.981 qpair failed and we were unable to recover it. 00:29:49.981 [2024-07-24 19:21:55.496324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.981 [2024-07-24 19:21:55.496491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.981 [2024-07-24 19:21:55.496524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.981 [2024-07-24 19:21:55.496543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.981 [2024-07-24 19:21:55.496559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.981 [2024-07-24 19:21:55.496597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.981 qpair failed and we were unable to recover it. 
00:29:49.981 [2024-07-24 19:21:55.506346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.981 [2024-07-24 19:21:55.506496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.981 [2024-07-24 19:21:55.506530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.981 [2024-07-24 19:21:55.506549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.981 [2024-07-24 19:21:55.506582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.981 [2024-07-24 19:21:55.506622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.981 qpair failed and we were unable to recover it. 00:29:49.981 [2024-07-24 19:21:55.516436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.981 [2024-07-24 19:21:55.516568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.981 [2024-07-24 19:21:55.516602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.981 [2024-07-24 19:21:55.516621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.981 [2024-07-24 19:21:55.516637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.981 [2024-07-24 19:21:55.516675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.981 qpair failed and we were unable to recover it. 00:29:49.981 [2024-07-24 19:21:55.526420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.981 [2024-07-24 19:21:55.526563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.981 [2024-07-24 19:21:55.526598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.981 [2024-07-24 19:21:55.526616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.981 [2024-07-24 19:21:55.526631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.982 [2024-07-24 19:21:55.526669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.982 qpair failed and we were unable to recover it. 
00:29:49.982 [2024-07-24 19:21:55.536464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.982 [2024-07-24 19:21:55.536615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.982 [2024-07-24 19:21:55.536649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.982 [2024-07-24 19:21:55.536668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.982 [2024-07-24 19:21:55.536683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.982 [2024-07-24 19:21:55.536721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.982 qpair failed and we were unable to recover it. 00:29:49.982 [2024-07-24 19:21:55.546521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.982 [2024-07-24 19:21:55.546716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.982 [2024-07-24 19:21:55.546750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.982 [2024-07-24 19:21:55.546768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.982 [2024-07-24 19:21:55.546784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.982 [2024-07-24 19:21:55.546823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.982 qpair failed and we were unable to recover it. 00:29:49.982 [2024-07-24 19:21:55.556538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.982 [2024-07-24 19:21:55.556678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.982 [2024-07-24 19:21:55.556713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.982 [2024-07-24 19:21:55.556732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.982 [2024-07-24 19:21:55.556747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.982 [2024-07-24 19:21:55.556785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.982 qpair failed and we were unable to recover it. 
00:29:49.982 [2024-07-24 19:21:55.566532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.982 [2024-07-24 19:21:55.566683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.982 [2024-07-24 19:21:55.566718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.982 [2024-07-24 19:21:55.566736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.982 [2024-07-24 19:21:55.566751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.982 [2024-07-24 19:21:55.566789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.982 qpair failed and we were unable to recover it. 00:29:49.982 [2024-07-24 19:21:55.576562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.982 [2024-07-24 19:21:55.576701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.982 [2024-07-24 19:21:55.576734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.982 [2024-07-24 19:21:55.576753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.982 [2024-07-24 19:21:55.576768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.982 [2024-07-24 19:21:55.576806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.982 qpair failed and we were unable to recover it. 00:29:49.982 [2024-07-24 19:21:55.586618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.982 [2024-07-24 19:21:55.586803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.982 [2024-07-24 19:21:55.586837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.982 [2024-07-24 19:21:55.586856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.982 [2024-07-24 19:21:55.586871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.982 [2024-07-24 19:21:55.586909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.982 qpair failed and we were unable to recover it. 
00:29:49.982 [2024-07-24 19:21:55.596578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.982 [2024-07-24 19:21:55.596711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.982 [2024-07-24 19:21:55.596744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.982 [2024-07-24 19:21:55.596771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.982 [2024-07-24 19:21:55.596787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.982 [2024-07-24 19:21:55.596824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.982 qpair failed and we were unable to recover it. 00:29:49.982 [2024-07-24 19:21:55.606613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.982 [2024-07-24 19:21:55.606736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.982 [2024-07-24 19:21:55.606770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.982 [2024-07-24 19:21:55.606789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.982 [2024-07-24 19:21:55.606804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.982 [2024-07-24 19:21:55.606843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.982 qpair failed and we were unable to recover it. 00:29:49.982 [2024-07-24 19:21:55.616677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.982 [2024-07-24 19:21:55.616821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.982 [2024-07-24 19:21:55.616855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.982 [2024-07-24 19:21:55.616874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.982 [2024-07-24 19:21:55.616890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.982 [2024-07-24 19:21:55.616927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.982 qpair failed and we were unable to recover it. 
00:29:49.982 [2024-07-24 19:21:55.626668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.982 [2024-07-24 19:21:55.626808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.982 [2024-07-24 19:21:55.626842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.982 [2024-07-24 19:21:55.626861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.982 [2024-07-24 19:21:55.626877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.982 [2024-07-24 19:21:55.626915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.982 qpair failed and we were unable to recover it. 00:29:49.982 [2024-07-24 19:21:55.636721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.982 [2024-07-24 19:21:55.636894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.982 [2024-07-24 19:21:55.636926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.982 [2024-07-24 19:21:55.636944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.982 [2024-07-24 19:21:55.636959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.982 [2024-07-24 19:21:55.636998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.982 qpair failed and we were unable to recover it. 00:29:49.982 [2024-07-24 19:21:55.646733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.982 [2024-07-24 19:21:55.646866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.982 [2024-07-24 19:21:55.646900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.982 [2024-07-24 19:21:55.646919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.982 [2024-07-24 19:21:55.646934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.982 [2024-07-24 19:21:55.646971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.982 qpair failed and we were unable to recover it. 
00:29:49.982 [2024-07-24 19:21:55.656784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.982 [2024-07-24 19:21:55.656917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.982 [2024-07-24 19:21:55.656959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.982 [2024-07-24 19:21:55.656978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.982 [2024-07-24 19:21:55.656993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.982 [2024-07-24 19:21:55.657030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.983 qpair failed and we were unable to recover it. 00:29:49.983 [2024-07-24 19:21:55.666817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:49.983 [2024-07-24 19:21:55.666990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:49.983 [2024-07-24 19:21:55.667032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:49.983 [2024-07-24 19:21:55.667050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:49.983 [2024-07-24 19:21:55.667065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:49.983 [2024-07-24 19:21:55.667103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:49.983 qpair failed and we were unable to recover it. 00:29:50.241 [2024-07-24 19:21:55.676880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.241 [2024-07-24 19:21:55.677021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.241 [2024-07-24 19:21:55.677056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.241 [2024-07-24 19:21:55.677074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.241 [2024-07-24 19:21:55.677090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.241 [2024-07-24 19:21:55.677128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.241 qpair failed and we were unable to recover it. 
00:29:50.241 [2024-07-24 19:21:55.686846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.241 [2024-07-24 19:21:55.686982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.241 [2024-07-24 19:21:55.687023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.241 [2024-07-24 19:21:55.687043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.241 [2024-07-24 19:21:55.687058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.241 [2024-07-24 19:21:55.687095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.242 qpair failed and we were unable to recover it. 00:29:50.242 [2024-07-24 19:21:55.696903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.242 [2024-07-24 19:21:55.697043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.242 [2024-07-24 19:21:55.697077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.242 [2024-07-24 19:21:55.697095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.242 [2024-07-24 19:21:55.697111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.242 [2024-07-24 19:21:55.697150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.242 qpair failed and we were unable to recover it. 00:29:50.242 [2024-07-24 19:21:55.706922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.242 [2024-07-24 19:21:55.707058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.242 [2024-07-24 19:21:55.707093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.242 [2024-07-24 19:21:55.707111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.242 [2024-07-24 19:21:55.707127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.242 [2024-07-24 19:21:55.707165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.242 qpair failed and we were unable to recover it. 
00:29:50.242 [2024-07-24 19:21:55.717016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.242 [2024-07-24 19:21:55.717150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.242 [2024-07-24 19:21:55.717184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.242 [2024-07-24 19:21:55.717202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.242 [2024-07-24 19:21:55.717217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.242 [2024-07-24 19:21:55.717255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.242 qpair failed and we were unable to recover it. 00:29:50.242 [2024-07-24 19:21:55.727020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.242 [2024-07-24 19:21:55.727185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.242 [2024-07-24 19:21:55.727219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.242 [2024-07-24 19:21:55.727239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.242 [2024-07-24 19:21:55.727254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.242 [2024-07-24 19:21:55.727298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.242 qpair failed and we were unable to recover it. 00:29:50.242 [2024-07-24 19:21:55.737075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.242 [2024-07-24 19:21:55.737222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.242 [2024-07-24 19:21:55.737255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.242 [2024-07-24 19:21:55.737273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.242 [2024-07-24 19:21:55.737289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.242 [2024-07-24 19:21:55.737327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.242 qpair failed and we were unable to recover it. 
00:29:50.242 [2024-07-24 19:21:55.747087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.242 [2024-07-24 19:21:55.747250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.242 [2024-07-24 19:21:55.747285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.242 [2024-07-24 19:21:55.747303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.242 [2024-07-24 19:21:55.747318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.242 [2024-07-24 19:21:55.747356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.242 qpair failed and we were unable to recover it. 00:29:50.242 [2024-07-24 19:21:55.757159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.242 [2024-07-24 19:21:55.757289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.242 [2024-07-24 19:21:55.757320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.242 [2024-07-24 19:21:55.757338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.242 [2024-07-24 19:21:55.757354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.242 [2024-07-24 19:21:55.757392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.242 qpair failed and we were unable to recover it. 00:29:50.242 [2024-07-24 19:21:55.767145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.242 [2024-07-24 19:21:55.767279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.242 [2024-07-24 19:21:55.767313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.242 [2024-07-24 19:21:55.767331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.242 [2024-07-24 19:21:55.767346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.242 [2024-07-24 19:21:55.767385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.242 qpair failed and we were unable to recover it. 
00:29:50.242 [2024-07-24 19:21:55.777143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.242 [2024-07-24 19:21:55.777289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.242 [2024-07-24 19:21:55.777328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.242 [2024-07-24 19:21:55.777347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.242 [2024-07-24 19:21:55.777362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.242 [2024-07-24 19:21:55.777400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.242 qpair failed and we were unable to recover it. 00:29:50.242 [2024-07-24 19:21:55.787177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.242 [2024-07-24 19:21:55.787315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.242 [2024-07-24 19:21:55.787348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.242 [2024-07-24 19:21:55.787367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.242 [2024-07-24 19:21:55.787382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.242 [2024-07-24 19:21:55.787419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.242 qpair failed and we were unable to recover it. 00:29:50.242 [2024-07-24 19:21:55.797207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.242 [2024-07-24 19:21:55.797336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.242 [2024-07-24 19:21:55.797370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.242 [2024-07-24 19:21:55.797389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.242 [2024-07-24 19:21:55.797404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.242 [2024-07-24 19:21:55.797451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.242 qpair failed and we were unable to recover it. 
00:29:50.242 [2024-07-24 19:21:55.807214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.242 [2024-07-24 19:21:55.807340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.242 [2024-07-24 19:21:55.807374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.242 [2024-07-24 19:21:55.807392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.242 [2024-07-24 19:21:55.807408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.242 [2024-07-24 19:21:55.807455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.242 qpair failed and we were unable to recover it. 00:29:50.242 [2024-07-24 19:21:55.817264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.242 [2024-07-24 19:21:55.817411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.242 [2024-07-24 19:21:55.817457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.242 [2024-07-24 19:21:55.817477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.242 [2024-07-24 19:21:55.817493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.242 [2024-07-24 19:21:55.817537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.242 qpair failed and we were unable to recover it. 00:29:50.242 [2024-07-24 19:21:55.827286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.242 [2024-07-24 19:21:55.827437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.243 [2024-07-24 19:21:55.827471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.243 [2024-07-24 19:21:55.827490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.243 [2024-07-24 19:21:55.827506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.243 [2024-07-24 19:21:55.827544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.243 qpair failed and we were unable to recover it. 
00:29:50.243 [2024-07-24 19:21:55.837346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.243 [2024-07-24 19:21:55.837502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.243 [2024-07-24 19:21:55.837536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.243 [2024-07-24 19:21:55.837554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.243 [2024-07-24 19:21:55.837569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.243 [2024-07-24 19:21:55.837607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.243 qpair failed and we were unable to recover it.
00:29:50.243 [2024-07-24 19:21:55.847331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.243 [2024-07-24 19:21:55.847474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.243 [2024-07-24 19:21:55.847508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.243 [2024-07-24 19:21:55.847527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.243 [2024-07-24 19:21:55.847542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.243 [2024-07-24 19:21:55.847580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.243 qpair failed and we were unable to recover it.
00:29:50.243 [2024-07-24 19:21:55.857423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.243 [2024-07-24 19:21:55.857614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.243 [2024-07-24 19:21:55.857647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.243 [2024-07-24 19:21:55.857666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.243 [2024-07-24 19:21:55.857682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.243 [2024-07-24 19:21:55.857720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.243 qpair failed and we were unable to recover it.
00:29:50.243 [2024-07-24 19:21:55.867413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.243 [2024-07-24 19:21:55.867588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.243 [2024-07-24 19:21:55.867622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.243 [2024-07-24 19:21:55.867640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.243 [2024-07-24 19:21:55.867656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.243 [2024-07-24 19:21:55.867694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.243 qpair failed and we were unable to recover it.
00:29:50.243 [2024-07-24 19:21:55.877464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.243 [2024-07-24 19:21:55.877613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.243 [2024-07-24 19:21:55.877646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.243 [2024-07-24 19:21:55.877665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.243 [2024-07-24 19:21:55.877680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.243 [2024-07-24 19:21:55.877718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.243 qpair failed and we were unable to recover it.
00:29:50.243 [2024-07-24 19:21:55.887460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.243 [2024-07-24 19:21:55.887603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.243 [2024-07-24 19:21:55.887635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.243 [2024-07-24 19:21:55.887653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.243 [2024-07-24 19:21:55.887668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.243 [2024-07-24 19:21:55.887705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.243 qpair failed and we were unable to recover it.
00:29:50.243 [2024-07-24 19:21:55.897563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.243 [2024-07-24 19:21:55.897705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.243 [2024-07-24 19:21:55.897738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.243 [2024-07-24 19:21:55.897756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.243 [2024-07-24 19:21:55.897771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.243 [2024-07-24 19:21:55.897808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.243 qpair failed and we were unable to recover it.
00:29:50.243 [2024-07-24 19:21:55.907549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.243 [2024-07-24 19:21:55.907692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.243 [2024-07-24 19:21:55.907725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.243 [2024-07-24 19:21:55.907743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.243 [2024-07-24 19:21:55.907766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.243 [2024-07-24 19:21:55.907805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.243 qpair failed and we were unable to recover it.
00:29:50.243 [2024-07-24 19:21:55.917575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.243 [2024-07-24 19:21:55.917736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.243 [2024-07-24 19:21:55.917769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.243 [2024-07-24 19:21:55.917787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.243 [2024-07-24 19:21:55.917803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.243 [2024-07-24 19:21:55.917841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.243 qpair failed and we were unable to recover it.
00:29:50.243 [2024-07-24 19:21:55.927579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.243 [2024-07-24 19:21:55.927717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.243 [2024-07-24 19:21:55.927751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.243 [2024-07-24 19:21:55.927770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.243 [2024-07-24 19:21:55.927785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.243 [2024-07-24 19:21:55.927822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.243 qpair failed and we were unable to recover it.
00:29:50.502 [2024-07-24 19:21:55.937632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.502 [2024-07-24 19:21:55.937774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.502 [2024-07-24 19:21:55.937807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.502 [2024-07-24 19:21:55.937826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.502 [2024-07-24 19:21:55.937841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.502 [2024-07-24 19:21:55.937879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.502 qpair failed and we were unable to recover it.
00:29:50.502 [2024-07-24 19:21:55.947640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.502 [2024-07-24 19:21:55.947782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.502 [2024-07-24 19:21:55.947817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.502 [2024-07-24 19:21:55.947835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.502 [2024-07-24 19:21:55.947850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.502 [2024-07-24 19:21:55.947888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.502 qpair failed and we were unable to recover it.
00:29:50.502 [2024-07-24 19:21:55.957706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.502 [2024-07-24 19:21:55.957845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.502 [2024-07-24 19:21:55.957880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.502 [2024-07-24 19:21:55.957898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.502 [2024-07-24 19:21:55.957913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.502 [2024-07-24 19:21:55.957951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.502 qpair failed and we were unable to recover it.
00:29:50.502 [2024-07-24 19:21:55.967689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.502 [2024-07-24 19:21:55.967829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.502 [2024-07-24 19:21:55.967863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.502 [2024-07-24 19:21:55.967881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.502 [2024-07-24 19:21:55.967896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.502 [2024-07-24 19:21:55.967934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.502 qpair failed and we were unable to recover it.
00:29:50.502 [2024-07-24 19:21:55.977746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.502 [2024-07-24 19:21:55.977899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.502 [2024-07-24 19:21:55.977933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.502 [2024-07-24 19:21:55.977951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.502 [2024-07-24 19:21:55.977966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.502 [2024-07-24 19:21:55.978004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.503 qpair failed and we were unable to recover it.
00:29:50.503 [2024-07-24 19:21:55.987749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.503 [2024-07-24 19:21:55.987886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.503 [2024-07-24 19:21:55.987920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.503 [2024-07-24 19:21:55.987938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.503 [2024-07-24 19:21:55.987953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.503 [2024-07-24 19:21:55.987991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.503 qpair failed and we were unable to recover it.
00:29:50.503 [2024-07-24 19:21:55.997824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.503 [2024-07-24 19:21:55.997958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.503 [2024-07-24 19:21:55.997991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.503 [2024-07-24 19:21:55.998017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.503 [2024-07-24 19:21:55.998033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.503 [2024-07-24 19:21:55.998071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.503 qpair failed and we were unable to recover it.
00:29:50.503 [2024-07-24 19:21:56.007834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.503 [2024-07-24 19:21:56.007969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.503 [2024-07-24 19:21:56.008003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.503 [2024-07-24 19:21:56.008022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.503 [2024-07-24 19:21:56.008037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.503 [2024-07-24 19:21:56.008074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.503 qpair failed and we were unable to recover it.
00:29:50.503 [2024-07-24 19:21:56.017883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.503 [2024-07-24 19:21:56.018039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.503 [2024-07-24 19:21:56.018073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.503 [2024-07-24 19:21:56.018092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.503 [2024-07-24 19:21:56.018107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.503 [2024-07-24 19:21:56.018144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.503 qpair failed and we were unable to recover it.
00:29:50.503 [2024-07-24 19:21:56.027888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.503 [2024-07-24 19:21:56.028018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.503 [2024-07-24 19:21:56.028052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.503 [2024-07-24 19:21:56.028072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.503 [2024-07-24 19:21:56.028088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.503 [2024-07-24 19:21:56.028126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.503 qpair failed and we were unable to recover it.
00:29:50.503 [2024-07-24 19:21:56.037954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.503 [2024-07-24 19:21:56.038144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.503 [2024-07-24 19:21:56.038177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.503 [2024-07-24 19:21:56.038195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.503 [2024-07-24 19:21:56.038210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.503 [2024-07-24 19:21:56.038248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.503 qpair failed and we were unable to recover it.
00:29:50.503 [2024-07-24 19:21:56.047925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.503 [2024-07-24 19:21:56.048064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.503 [2024-07-24 19:21:56.048097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.503 [2024-07-24 19:21:56.048115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.503 [2024-07-24 19:21:56.048130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.503 [2024-07-24 19:21:56.048167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.503 qpair failed and we were unable to recover it.
00:29:50.503 [2024-07-24 19:21:56.057975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.503 [2024-07-24 19:21:56.058150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.503 [2024-07-24 19:21:56.058183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.503 [2024-07-24 19:21:56.058202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.503 [2024-07-24 19:21:56.058217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.503 [2024-07-24 19:21:56.058254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.503 qpair failed and we were unable to recover it.
00:29:50.503 [2024-07-24 19:21:56.067983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.503 [2024-07-24 19:21:56.068119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.503 [2024-07-24 19:21:56.068153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.503 [2024-07-24 19:21:56.068171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.503 [2024-07-24 19:21:56.068186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.503 [2024-07-24 19:21:56.068224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.503 qpair failed and we were unable to recover it.
00:29:50.503 [2024-07-24 19:21:56.078062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.503 [2024-07-24 19:21:56.078241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.503 [2024-07-24 19:21:56.078275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.503 [2024-07-24 19:21:56.078294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.503 [2024-07-24 19:21:56.078309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.503 [2024-07-24 19:21:56.078347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.503 qpair failed and we were unable to recover it.
00:29:50.503 [2024-07-24 19:21:56.088040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.503 [2024-07-24 19:21:56.088170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.503 [2024-07-24 19:21:56.088212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.503 [2024-07-24 19:21:56.088233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.503 [2024-07-24 19:21:56.088248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.503 [2024-07-24 19:21:56.088286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.503 qpair failed and we were unable to recover it.
00:29:50.503 [2024-07-24 19:21:56.098117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.503 [2024-07-24 19:21:56.098263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.503 [2024-07-24 19:21:56.098296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.503 [2024-07-24 19:21:56.098314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.503 [2024-07-24 19:21:56.098329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.503 [2024-07-24 19:21:56.098367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.503 qpair failed and we were unable to recover it.
00:29:50.503 [2024-07-24 19:21:56.108119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.503 [2024-07-24 19:21:56.108275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.503 [2024-07-24 19:21:56.108309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.503 [2024-07-24 19:21:56.108327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.503 [2024-07-24 19:21:56.108342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.503 [2024-07-24 19:21:56.108380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.503 qpair failed and we were unable to recover it.
00:29:50.503 [2024-07-24 19:21:56.118133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.503 [2024-07-24 19:21:56.118274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.504 [2024-07-24 19:21:56.118308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.504 [2024-07-24 19:21:56.118326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.504 [2024-07-24 19:21:56.118341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.504 [2024-07-24 19:21:56.118379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.504 qpair failed and we were unable to recover it.
00:29:50.504 [2024-07-24 19:21:56.128178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.504 [2024-07-24 19:21:56.128317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.504 [2024-07-24 19:21:56.128351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.504 [2024-07-24 19:21:56.128369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.504 [2024-07-24 19:21:56.128385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.504 [2024-07-24 19:21:56.128422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.504 qpair failed and we were unable to recover it.
00:29:50.504 [2024-07-24 19:21:56.138254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.504 [2024-07-24 19:21:56.138392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.504 [2024-07-24 19:21:56.138423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.504 [2024-07-24 19:21:56.138457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.504 [2024-07-24 19:21:56.138473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.504 [2024-07-24 19:21:56.138512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.504 qpair failed and we were unable to recover it.
00:29:50.504 [2024-07-24 19:21:56.148253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.504 [2024-07-24 19:21:56.148399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.504 [2024-07-24 19:21:56.148442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.504 [2024-07-24 19:21:56.148463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.504 [2024-07-24 19:21:56.148478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.504 [2024-07-24 19:21:56.148516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.504 qpair failed and we were unable to recover it.
00:29:50.504 [2024-07-24 19:21:56.158255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.504 [2024-07-24 19:21:56.158391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.504 [2024-07-24 19:21:56.158423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.504 [2024-07-24 19:21:56.158459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.504 [2024-07-24 19:21:56.158475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.504 [2024-07-24 19:21:56.158514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.504 qpair failed and we were unable to recover it.
00:29:50.504 [2024-07-24 19:21:56.168275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.504 [2024-07-24 19:21:56.168410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.504 [2024-07-24 19:21:56.168451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.504 [2024-07-24 19:21:56.168470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.504 [2024-07-24 19:21:56.168485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.504 [2024-07-24 19:21:56.168523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.504 qpair failed and we were unable to recover it.
00:29:50.504 [2024-07-24 19:21:56.178347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.504 [2024-07-24 19:21:56.178514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.504 [2024-07-24 19:21:56.178554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.504 [2024-07-24 19:21:56.178574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.504 [2024-07-24 19:21:56.178589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.504 [2024-07-24 19:21:56.178628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.504 qpair failed and we were unable to recover it.
00:29:50.504 [2024-07-24 19:21:56.188368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.504 [2024-07-24 19:21:56.188510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.504 [2024-07-24 19:21:56.188554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.504 [2024-07-24 19:21:56.188574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.504 [2024-07-24 19:21:56.188589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.504 [2024-07-24 19:21:56.188628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.504 qpair failed and we were unable to recover it.
00:29:50.763 [2024-07-24 19:21:56.198374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.763 [2024-07-24 19:21:56.198538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.763 [2024-07-24 19:21:56.198573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.763 [2024-07-24 19:21:56.198591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.763 [2024-07-24 19:21:56.198607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.763 [2024-07-24 19:21:56.198645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.763 qpair failed and we were unable to recover it.
00:29:50.763 [2024-07-24 19:21:56.208490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.763 [2024-07-24 19:21:56.208626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.763 [2024-07-24 19:21:56.208659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.763 [2024-07-24 19:21:56.208678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.763 [2024-07-24 19:21:56.208692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.763 [2024-07-24 19:21:56.208730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.763 qpair failed and we were unable to recover it.
00:29:50.763 [2024-07-24 19:21:56.218482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.763 [2024-07-24 19:21:56.218655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.763 [2024-07-24 19:21:56.218688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.763 [2024-07-24 19:21:56.218707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.763 [2024-07-24 19:21:56.218722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.763 [2024-07-24 19:21:56.218767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.763 qpair failed and we were unable to recover it.
00:29:50.763 [2024-07-24 19:21:56.228461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.763 [2024-07-24 19:21:56.228634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.763 [2024-07-24 19:21:56.228668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.763 [2024-07-24 19:21:56.228686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.763 [2024-07-24 19:21:56.228702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.763 [2024-07-24 19:21:56.228740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.763 qpair failed and we were unable to recover it.
00:29:50.763 [2024-07-24 19:21:56.238504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.763 [2024-07-24 19:21:56.238663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.763 [2024-07-24 19:21:56.238697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.763 [2024-07-24 19:21:56.238715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.763 [2024-07-24 19:21:56.238730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.763 [2024-07-24 19:21:56.238769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.763 qpair failed and we were unable to recover it.
00:29:50.763 [2024-07-24 19:21:56.248510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.763 [2024-07-24 19:21:56.248651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.763 [2024-07-24 19:21:56.248684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.763 [2024-07-24 19:21:56.248703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.763 [2024-07-24 19:21:56.248718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.763 [2024-07-24 19:21:56.248755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.763 qpair failed and we were unable to recover it.
00:29:50.763 [2024-07-24 19:21:56.258578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.763 [2024-07-24 19:21:56.258722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.763 [2024-07-24 19:21:56.258755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.763 [2024-07-24 19:21:56.258773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.763 [2024-07-24 19:21:56.258788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.763 [2024-07-24 19:21:56.258826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.763 qpair failed and we were unable to recover it.
00:29:50.763 [2024-07-24 19:21:56.268568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.763 [2024-07-24 19:21:56.268718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.763 [2024-07-24 19:21:56.268759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.763 [2024-07-24 19:21:56.268779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.763 [2024-07-24 19:21:56.268794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.763 [2024-07-24 19:21:56.268831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.763 qpair failed and we were unable to recover it.
00:29:50.763 [2024-07-24 19:21:56.278598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.763 [2024-07-24 19:21:56.278737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.763 [2024-07-24 19:21:56.278771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.763 [2024-07-24 19:21:56.278790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.763 [2024-07-24 19:21:56.278805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.763 [2024-07-24 19:21:56.278842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.763 qpair failed and we were unable to recover it.
00:29:50.763 [2024-07-24 19:21:56.288665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.763 [2024-07-24 19:21:56.288798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.763 [2024-07-24 19:21:56.288832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.763 [2024-07-24 19:21:56.288851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.763 [2024-07-24 19:21:56.288866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.763 [2024-07-24 19:21:56.288903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.763 qpair failed and we were unable to recover it.
00:29:50.763 [2024-07-24 19:21:56.298697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.763 [2024-07-24 19:21:56.298851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.763 [2024-07-24 19:21:56.298884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.763 [2024-07-24 19:21:56.298903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.763 [2024-07-24 19:21:56.298918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.763 [2024-07-24 19:21:56.298956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.764 qpair failed and we were unable to recover it.
00:29:50.764 [2024-07-24 19:21:56.308673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.764 [2024-07-24 19:21:56.308840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.764 [2024-07-24 19:21:56.308873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.764 [2024-07-24 19:21:56.308891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.764 [2024-07-24 19:21:56.308914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.764 [2024-07-24 19:21:56.308952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.764 qpair failed and we were unable to recover it.
00:29:50.764 [2024-07-24 19:21:56.318719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.764 [2024-07-24 19:21:56.318852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.764 [2024-07-24 19:21:56.318885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.764 [2024-07-24 19:21:56.318903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.764 [2024-07-24 19:21:56.318918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.764 [2024-07-24 19:21:56.318956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.764 qpair failed and we were unable to recover it.
00:29:50.764 [2024-07-24 19:21:56.328741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.764 [2024-07-24 19:21:56.328880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.764 [2024-07-24 19:21:56.328912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.764 [2024-07-24 19:21:56.328931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.764 [2024-07-24 19:21:56.328946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.764 [2024-07-24 19:21:56.328983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.764 qpair failed and we were unable to recover it.
00:29:50.764 [2024-07-24 19:21:56.338789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.764 [2024-07-24 19:21:56.338944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.764 [2024-07-24 19:21:56.338977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.764 [2024-07-24 19:21:56.338995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.764 [2024-07-24 19:21:56.339010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.764 [2024-07-24 19:21:56.339048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.764 qpair failed and we were unable to recover it.
00:29:50.764 [2024-07-24 19:21:56.348793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.764 [2024-07-24 19:21:56.348937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.764 [2024-07-24 19:21:56.348971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.764 [2024-07-24 19:21:56.348989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.764 [2024-07-24 19:21:56.349004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.764 [2024-07-24 19:21:56.349040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.764 qpair failed and we were unable to recover it.
00:29:50.764 [2024-07-24 19:21:56.358827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.764 [2024-07-24 19:21:56.358991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.764 [2024-07-24 19:21:56.359024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.764 [2024-07-24 19:21:56.359043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.764 [2024-07-24 19:21:56.359058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.764 [2024-07-24 19:21:56.359095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.764 qpair failed and we were unable to recover it.
00:29:50.764 [2024-07-24 19:21:56.368866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.764 [2024-07-24 19:21:56.369003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.764 [2024-07-24 19:21:56.369036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.764 [2024-07-24 19:21:56.369054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.764 [2024-07-24 19:21:56.369069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.764 [2024-07-24 19:21:56.369106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.764 qpair failed and we were unable to recover it.
00:29:50.764 [2024-07-24 19:21:56.378912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.764 [2024-07-24 19:21:56.379099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.764 [2024-07-24 19:21:56.379132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.764 [2024-07-24 19:21:56.379150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.764 [2024-07-24 19:21:56.379165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.764 [2024-07-24 19:21:56.379203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.764 qpair failed and we were unable to recover it.
00:29:50.764 [2024-07-24 19:21:56.388906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.764 [2024-07-24 19:21:56.389051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.764 [2024-07-24 19:21:56.389084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.764 [2024-07-24 19:21:56.389102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.764 [2024-07-24 19:21:56.389117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.764 [2024-07-24 19:21:56.389155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.764 qpair failed and we were unable to recover it.
00:29:50.764 [2024-07-24 19:21:56.398936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:50.764 [2024-07-24 19:21:56.399095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:50.764 [2024-07-24 19:21:56.399128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:50.764 [2024-07-24 19:21:56.399154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:50.764 [2024-07-24 19:21:56.399170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90
00:29:50.764 [2024-07-24 19:21:56.399208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:50.764 qpair failed and we were unable to recover it.
00:29:50.764 [2024-07-24 19:21:56.408963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.764 [2024-07-24 19:21:56.409094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.764 [2024-07-24 19:21:56.409129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.764 [2024-07-24 19:21:56.409147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.764 [2024-07-24 19:21:56.409162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.764 [2024-07-24 19:21:56.409200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.764 qpair failed and we were unable to recover it. 00:29:50.764 [2024-07-24 19:21:56.419021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.764 [2024-07-24 19:21:56.419167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.764 [2024-07-24 19:21:56.419201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.764 [2024-07-24 19:21:56.419219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.764 [2024-07-24 19:21:56.419234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.764 [2024-07-24 19:21:56.419271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.764 qpair failed and we were unable to recover it. 00:29:50.764 [2024-07-24 19:21:56.429029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.764 [2024-07-24 19:21:56.429164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.764 [2024-07-24 19:21:56.429198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.764 [2024-07-24 19:21:56.429215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.764 [2024-07-24 19:21:56.429231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.764 [2024-07-24 19:21:56.429268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.764 qpair failed and we were unable to recover it. 
00:29:50.764 [2024-07-24 19:21:56.439051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.764 [2024-07-24 19:21:56.439203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.765 [2024-07-24 19:21:56.439236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.765 [2024-07-24 19:21:56.439254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.765 [2024-07-24 19:21:56.439279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.765 [2024-07-24 19:21:56.439317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.765 qpair failed and we were unable to recover it. 00:29:50.765 [2024-07-24 19:21:56.449102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:50.765 [2024-07-24 19:21:56.449259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:50.765 [2024-07-24 19:21:56.449293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:50.765 [2024-07-24 19:21:56.449312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:50.765 [2024-07-24 19:21:56.449327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:50.765 [2024-07-24 19:21:56.449365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.765 qpair failed and we were unable to recover it. 00:29:51.023 [2024-07-24 19:21:56.459132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.023 [2024-07-24 19:21:56.459270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.024 [2024-07-24 19:21:56.459303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.024 [2024-07-24 19:21:56.459322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.024 [2024-07-24 19:21:56.459337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.024 [2024-07-24 19:21:56.459375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.024 qpair failed and we were unable to recover it. 
00:29:51.024 [2024-07-24 19:21:56.469164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.024 [2024-07-24 19:21:56.469329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.024 [2024-07-24 19:21:56.469363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.024 [2024-07-24 19:21:56.469381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.024 [2024-07-24 19:21:56.469396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.024 [2024-07-24 19:21:56.469445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.024 qpair failed and we were unable to recover it. 00:29:51.024 [2024-07-24 19:21:56.479165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.024 [2024-07-24 19:21:56.479299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.024 [2024-07-24 19:21:56.479333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.024 [2024-07-24 19:21:56.479352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.024 [2024-07-24 19:21:56.479367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.024 [2024-07-24 19:21:56.479404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.024 qpair failed and we were unable to recover it. 00:29:51.024 [2024-07-24 19:21:56.489245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.024 [2024-07-24 19:21:56.489385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.024 [2024-07-24 19:21:56.489418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.024 [2024-07-24 19:21:56.489453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.024 [2024-07-24 19:21:56.489470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.024 [2024-07-24 19:21:56.489508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.024 qpair failed and we were unable to recover it. 
00:29:51.024 [2024-07-24 19:21:56.499286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.024 [2024-07-24 19:21:56.499485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.024 [2024-07-24 19:21:56.499531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.024 [2024-07-24 19:21:56.499550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.024 [2024-07-24 19:21:56.499565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.024 [2024-07-24 19:21:56.499605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.024 qpair failed and we were unable to recover it. 00:29:51.024 [2024-07-24 19:21:56.509296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.024 [2024-07-24 19:21:56.509456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.024 [2024-07-24 19:21:56.509491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.024 [2024-07-24 19:21:56.509509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.024 [2024-07-24 19:21:56.509524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.024 [2024-07-24 19:21:56.509562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.024 qpair failed and we were unable to recover it. 00:29:51.024 [2024-07-24 19:21:56.519301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.024 [2024-07-24 19:21:56.519475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.024 [2024-07-24 19:21:56.519508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.024 [2024-07-24 19:21:56.519527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.024 [2024-07-24 19:21:56.519542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.024 [2024-07-24 19:21:56.519581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.024 qpair failed and we were unable to recover it. 
00:29:51.024 [2024-07-24 19:21:56.529334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.024 [2024-07-24 19:21:56.529480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.024 [2024-07-24 19:21:56.529515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.024 [2024-07-24 19:21:56.529533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.024 [2024-07-24 19:21:56.529549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.024 [2024-07-24 19:21:56.529587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.024 qpair failed and we were unable to recover it. 00:29:51.024 [2024-07-24 19:21:56.539410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.024 [2024-07-24 19:21:56.539600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.024 [2024-07-24 19:21:56.539633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.024 [2024-07-24 19:21:56.539651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.024 [2024-07-24 19:21:56.539666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.024 [2024-07-24 19:21:56.539704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.024 qpair failed and we were unable to recover it. 00:29:51.024 [2024-07-24 19:21:56.549388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.024 [2024-07-24 19:21:56.549533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.024 [2024-07-24 19:21:56.549568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.024 [2024-07-24 19:21:56.549586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.024 [2024-07-24 19:21:56.549602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.024 [2024-07-24 19:21:56.549639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.024 qpair failed and we were unable to recover it. 
00:29:51.024 [2024-07-24 19:21:56.559424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.024 [2024-07-24 19:21:56.559589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.024 [2024-07-24 19:21:56.559623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.024 [2024-07-24 19:21:56.559641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.024 [2024-07-24 19:21:56.559656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.024 [2024-07-24 19:21:56.559694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.024 qpair failed and we were unable to recover it. 00:29:51.024 [2024-07-24 19:21:56.569449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.024 [2024-07-24 19:21:56.569584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.024 [2024-07-24 19:21:56.569618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.024 [2024-07-24 19:21:56.569636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.024 [2024-07-24 19:21:56.569652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.024 [2024-07-24 19:21:56.569690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.024 qpair failed and we were unable to recover it. 00:29:51.024 [2024-07-24 19:21:56.579493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.024 [2024-07-24 19:21:56.579639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.024 [2024-07-24 19:21:56.579684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.024 [2024-07-24 19:21:56.579704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.024 [2024-07-24 19:21:56.579719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.025 [2024-07-24 19:21:56.579758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.025 qpair failed and we were unable to recover it. 
00:29:51.025 [2024-07-24 19:21:56.589513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.025 [2024-07-24 19:21:56.589661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.025 [2024-07-24 19:21:56.589694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.025 [2024-07-24 19:21:56.589713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.025 [2024-07-24 19:21:56.589727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.025 [2024-07-24 19:21:56.589764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.025 qpair failed and we were unable to recover it. 00:29:51.025 [2024-07-24 19:21:56.599516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.025 [2024-07-24 19:21:56.599677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.025 [2024-07-24 19:21:56.599711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.025 [2024-07-24 19:21:56.599730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.025 [2024-07-24 19:21:56.599745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.025 [2024-07-24 19:21:56.599782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.025 qpair failed and we were unable to recover it. 00:29:51.025 [2024-07-24 19:21:56.609637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.025 [2024-07-24 19:21:56.609773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.025 [2024-07-24 19:21:56.609807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.025 [2024-07-24 19:21:56.609825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.025 [2024-07-24 19:21:56.609840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.025 [2024-07-24 19:21:56.609877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.025 qpair failed and we were unable to recover it. 
00:29:51.025 [2024-07-24 19:21:56.619625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.025 [2024-07-24 19:21:56.619770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.025 [2024-07-24 19:21:56.619802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.025 [2024-07-24 19:21:56.619821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.025 [2024-07-24 19:21:56.619836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.025 [2024-07-24 19:21:56.619881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.025 qpair failed and we were unable to recover it. 00:29:51.025 [2024-07-24 19:21:56.629632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.025 [2024-07-24 19:21:56.629763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.025 [2024-07-24 19:21:56.629797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.025 [2024-07-24 19:21:56.629815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.025 [2024-07-24 19:21:56.629831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.025 [2024-07-24 19:21:56.629868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.025 qpair failed and we were unable to recover it. 00:29:51.025 [2024-07-24 19:21:56.639672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.025 [2024-07-24 19:21:56.639805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.025 [2024-07-24 19:21:56.639837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.025 [2024-07-24 19:21:56.639855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.025 [2024-07-24 19:21:56.639870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.025 [2024-07-24 19:21:56.639908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.025 qpair failed and we were unable to recover it. 
00:29:51.025 [2024-07-24 19:21:56.649677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.025 [2024-07-24 19:21:56.649811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.025 [2024-07-24 19:21:56.649845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.025 [2024-07-24 19:21:56.649864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.025 [2024-07-24 19:21:56.649879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.025 [2024-07-24 19:21:56.649917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.025 qpair failed and we were unable to recover it. 00:29:51.025 [2024-07-24 19:21:56.659721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.025 [2024-07-24 19:21:56.659859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.025 [2024-07-24 19:21:56.659892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.025 [2024-07-24 19:21:56.659910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.025 [2024-07-24 19:21:56.659925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.025 [2024-07-24 19:21:56.659963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.025 qpair failed and we were unable to recover it. 00:29:51.025 [2024-07-24 19:21:56.669750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.025 [2024-07-24 19:21:56.669887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.025 [2024-07-24 19:21:56.669928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.025 [2024-07-24 19:21:56.669947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.025 [2024-07-24 19:21:56.669962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.025 [2024-07-24 19:21:56.670001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.025 qpair failed and we were unable to recover it. 
00:29:51.025 [2024-07-24 19:21:56.679773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.025 [2024-07-24 19:21:56.679913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.025 [2024-07-24 19:21:56.679947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.025 [2024-07-24 19:21:56.679966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.025 [2024-07-24 19:21:56.679981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.025 [2024-07-24 19:21:56.680019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.025 qpair failed and we were unable to recover it. 00:29:51.025 [2024-07-24 19:21:56.689807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.025 [2024-07-24 19:21:56.689943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.025 [2024-07-24 19:21:56.689977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.025 [2024-07-24 19:21:56.689995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.025 [2024-07-24 19:21:56.690011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.025 [2024-07-24 19:21:56.690047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.025 qpair failed and we were unable to recover it. 00:29:51.025 [2024-07-24 19:21:56.699871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.025 [2024-07-24 19:21:56.700016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.025 [2024-07-24 19:21:56.700049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.025 [2024-07-24 19:21:56.700068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.025 [2024-07-24 19:21:56.700083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.025 [2024-07-24 19:21:56.700121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.025 qpair failed and we were unable to recover it. 
00:29:51.025 [2024-07-24 19:21:56.709859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.025 [2024-07-24 19:21:56.710005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.025 [2024-07-24 19:21:56.710038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.025 [2024-07-24 19:21:56.710057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.025 [2024-07-24 19:21:56.710079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.025 [2024-07-24 19:21:56.710118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.025 qpair failed and we were unable to recover it. 00:29:51.284 [2024-07-24 19:21:56.719920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.284 [2024-07-24 19:21:56.720073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.284 [2024-07-24 19:21:56.720106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.284 [2024-07-24 19:21:56.720124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.284 [2024-07-24 19:21:56.720139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.284 [2024-07-24 19:21:56.720178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.284 qpair failed and we were unable to recover it. 00:29:51.284 [2024-07-24 19:21:56.729908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.284 [2024-07-24 19:21:56.730055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.284 [2024-07-24 19:21:56.730089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.284 [2024-07-24 19:21:56.730108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.284 [2024-07-24 19:21:56.730123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.285 [2024-07-24 19:21:56.730161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.285 qpair failed and we were unable to recover it. 
00:29:51.285 [2024-07-24 19:21:56.739991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.285 [2024-07-24 19:21:56.740141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.285 [2024-07-24 19:21:56.740174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.285 [2024-07-24 19:21:56.740193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.285 [2024-07-24 19:21:56.740208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.285 [2024-07-24 19:21:56.740246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.285 qpair failed and we were unable to recover it. 00:29:51.285 [2024-07-24 19:21:56.749972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.285 [2024-07-24 19:21:56.750110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.285 [2024-07-24 19:21:56.750144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.285 [2024-07-24 19:21:56.750162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.285 [2024-07-24 19:21:56.750177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.285 [2024-07-24 19:21:56.750215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.285 qpair failed and we were unable to recover it. 00:29:51.285 [2024-07-24 19:21:56.760017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.285 [2024-07-24 19:21:56.760160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.285 [2024-07-24 19:21:56.760192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.285 [2024-07-24 19:21:56.760210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.285 [2024-07-24 19:21:56.760226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.285 [2024-07-24 19:21:56.760265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.285 qpair failed and we were unable to recover it. 
00:29:51.285 [2024-07-24 19:21:56.770035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.285 [2024-07-24 19:21:56.770165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.285 [2024-07-24 19:21:56.770199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.285 [2024-07-24 19:21:56.770219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.285 [2024-07-24 19:21:56.770234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.285 [2024-07-24 19:21:56.770272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.285 qpair failed and we were unable to recover it. 00:29:51.285 [2024-07-24 19:21:56.780082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.285 [2024-07-24 19:21:56.780216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.285 [2024-07-24 19:21:56.780250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.285 [2024-07-24 19:21:56.780269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.285 [2024-07-24 19:21:56.780284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.285 [2024-07-24 19:21:56.780322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.285 qpair failed and we were unable to recover it. 00:29:51.285 [2024-07-24 19:21:56.790113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.285 [2024-07-24 19:21:56.790275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.285 [2024-07-24 19:21:56.790308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.285 [2024-07-24 19:21:56.790327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.285 [2024-07-24 19:21:56.790342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.285 [2024-07-24 19:21:56.790380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.285 qpair failed and we were unable to recover it. 
00:29:51.285 [2024-07-24 19:21:56.800135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.285 [2024-07-24 19:21:56.800301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.285 [2024-07-24 19:21:56.800335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.285 [2024-07-24 19:21:56.800353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.285 [2024-07-24 19:21:56.800375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.285 [2024-07-24 19:21:56.800415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.285 qpair failed and we were unable to recover it. 00:29:51.285 [2024-07-24 19:21:56.810164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.285 [2024-07-24 19:21:56.810329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.285 [2024-07-24 19:21:56.810363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.285 [2024-07-24 19:21:56.810382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.285 [2024-07-24 19:21:56.810397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.285 [2024-07-24 19:21:56.810443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.285 qpair failed and we were unable to recover it. 00:29:51.285 [2024-07-24 19:21:56.820219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.285 [2024-07-24 19:21:56.820375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.285 [2024-07-24 19:21:56.820408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.285 [2024-07-24 19:21:56.820435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.285 [2024-07-24 19:21:56.820454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.285 [2024-07-24 19:21:56.820492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.285 qpair failed and we were unable to recover it. 
00:29:51.285 [2024-07-24 19:21:56.830217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.285 [2024-07-24 19:21:56.830357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.285 [2024-07-24 19:21:56.830390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.285 [2024-07-24 19:21:56.830408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.285 [2024-07-24 19:21:56.830424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.285 [2024-07-24 19:21:56.830483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.285 qpair failed and we were unable to recover it. 00:29:51.285 [2024-07-24 19:21:56.840286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.285 [2024-07-24 19:21:56.840474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.285 [2024-07-24 19:21:56.840509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.285 [2024-07-24 19:21:56.840527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.285 [2024-07-24 19:21:56.840543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.285 [2024-07-24 19:21:56.840581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.285 qpair failed and we were unable to recover it. 00:29:51.285 [2024-07-24 19:21:56.850296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.285 [2024-07-24 19:21:56.850444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.285 [2024-07-24 19:21:56.850479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.285 [2024-07-24 19:21:56.850498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.285 [2024-07-24 19:21:56.850513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.285 [2024-07-24 19:21:56.850551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.286 qpair failed and we were unable to recover it. 
00:29:51.286 [2024-07-24 19:21:56.860485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.286 [2024-07-24 19:21:56.860641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.286 [2024-07-24 19:21:56.860674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.286 [2024-07-24 19:21:56.860693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.286 [2024-07-24 19:21:56.860709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.286 [2024-07-24 19:21:56.860747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.286 qpair failed and we were unable to recover it. 00:29:51.286 [2024-07-24 19:21:56.870393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.286 [2024-07-24 19:21:56.870547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.286 [2024-07-24 19:21:56.870580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.286 [2024-07-24 19:21:56.870599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.286 [2024-07-24 19:21:56.870613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.286 [2024-07-24 19:21:56.870651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.286 qpair failed and we were unable to recover it. 00:29:51.286 [2024-07-24 19:21:56.880472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.286 [2024-07-24 19:21:56.880639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.286 [2024-07-24 19:21:56.880673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.286 [2024-07-24 19:21:56.880691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.286 [2024-07-24 19:21:56.880706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.286 [2024-07-24 19:21:56.880745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.286 qpair failed and we were unable to recover it. 
00:29:51.286 [2024-07-24 19:21:56.890479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.286 [2024-07-24 19:21:56.890618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.286 [2024-07-24 19:21:56.890651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.286 [2024-07-24 19:21:56.890675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.286 [2024-07-24 19:21:56.890691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.286 [2024-07-24 19:21:56.890729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.286 qpair failed and we were unable to recover it. 00:29:51.286 [2024-07-24 19:21:56.900463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.286 [2024-07-24 19:21:56.900638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.286 [2024-07-24 19:21:56.900672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.286 [2024-07-24 19:21:56.900690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.286 [2024-07-24 19:21:56.900705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.286 [2024-07-24 19:21:56.900744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.286 qpair failed and we were unable to recover it. 00:29:51.286 [2024-07-24 19:21:56.910486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.286 [2024-07-24 19:21:56.910625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.286 [2024-07-24 19:21:56.910659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.286 [2024-07-24 19:21:56.910679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.286 [2024-07-24 19:21:56.910694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.286 [2024-07-24 19:21:56.910731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.286 qpair failed and we were unable to recover it. 
00:29:51.286 [2024-07-24 19:21:56.920504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:51.286 [2024-07-24 19:21:56.920634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:51.286 [2024-07-24 19:21:56.920668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:51.286 [2024-07-24 19:21:56.920687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:51.286 [2024-07-24 19:21:56.920702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5e10000b90 00:29:51.286 [2024-07-24 19:21:56.920740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:51.286 qpair failed and we were unable to recover it.
[... the same connect-retry failure block recurs 20 more times at roughly 10 ms intervals, from 19:21:56.930 through 19:21:57.121, each attempt ending "qpair failed and we were unable to recover it." ...]
00:29:51.546 [2024-07-24 19:21:57.121486] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:51.546 A controller has encountered a failure and is being reset. 00:29:51.805 Controller properly reset.
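For context, decoding the repeated failure above: the target drops the controller during the simulated disconnect, so every I/O-queue CONNECT is rejected ("Unknown controller ID 0x1", fabrics status sct 1 / sc 130), the host-side connect poll returns rc -5 (-EIO), and the queue pair surfaces -6 (-ENXIO, "No such device or address") until the admin keep-alive finally fails and the host resets the controller. The same behavior can be observed by hand with nvme-cli against this listener; a minimal sketch, not part of the recorded run, with the address and NQN taken from the log above:

# hedged sketch: manual connect/inspect against the target from this log (nvme-cli)
modprobe nvme-tcp
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme list-subsys        # shows controller state: live / connecting / resetting
dmesg | tail -n 20      # kernel-side view of the queue connect errors
nvme disconnect -n nqn.2016-06.io.spdk:cnode1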
00:29:54.332 Initializing NVMe Controllers 00:29:54.332 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:54.332 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:54.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:54.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:54.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:54.332 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:54.332 Initialization complete. Launching workers. 00:29:54.332 Starting thread on core 1 00:29:54.332 Starting thread on core 2 00:29:54.332 Starting thread on core 3 00:29:54.332 Starting thread on core 0 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:54.333 00:29:54.333 real 0m11.078s 00:29:54.333 user 0m25.487s 00:29:54.333 sys 0m6.348s 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:54.333 ************************************ 00:29:54.333 END TEST nvmf_target_disconnect_tc2 00:29:54.333 ************************************ 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:54.333 rmmod nvme_tcp 00:29:54.333 rmmod nvme_fabrics 00:29:54.333 rmmod nvme_keyring 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1777789 ']' 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1777789 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1777789 ']' 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1777789 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1777789 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1777789' 00:29:54.333 killing process with pid 1777789 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 1777789 00:29:54.333 19:21:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1777789 00:29:54.901 19:22:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:54.901 19:22:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:54.901 19:22:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:54.901 19:22:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:54.901 19:22:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:54.901 19:22:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.901 19:22:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.901 19:22:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.806 19:22:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:56.806 00:29:56.806 real 0m17.115s 00:29:56.806 user 0m51.696s 00:29:56.806 sys 0m9.485s 00:29:56.806 19:22:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:56.806 19:22:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:56.806 ************************************ 00:29:56.806 END TEST nvmf_target_disconnect 00:29:56.806 ************************************ 00:29:57.082 19:22:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:57.082 00:29:57.082 real 6m10.204s 00:29:57.082 user 13m3.514s 00:29:57.082 sys 1m33.209s 00:29:57.082 19:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:57.082 19:22:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.082 ************************************ 00:29:57.082 END TEST nvmf_host 00:29:57.083 ************************************ 00:29:57.083 00:29:57.083 real 23m31.820s 00:29:57.083 user 55m19.896s 00:29:57.083 sys 5m56.409s 00:29:57.083 19:22:02 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:57.083 19:22:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:57.083 ************************************ 00:29:57.083 END TEST nvmf_tcp 00:29:57.083 ************************************ 00:29:57.083 19:22:02 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:29:57.083 19:22:02 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:57.083 19:22:02 -- common/autotest_common.sh@1101 -- # '[' 3 -le 
1 ']' 00:29:57.083 19:22:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:57.083 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:29:57.083 ************************************ 00:29:57.083 START TEST spdkcli_nvmf_tcp 00:29:57.083 ************************************ 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:57.083 * Looking for test storage... 00:29:57.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1778990 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # 
waitforlisten 1778990 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1778990 ']' 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:57.083 19:22:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:57.083 [2024-07-24 19:22:02.739600] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:29:57.083 [2024-07-24 19:22:02.739694] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778990 ] 00:29:57.341 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.341 [2024-07-24 19:22:02.815775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:57.341 [2024-07-24 19:22:02.959457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.341 [2024-07-24 19:22:02.959481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.274 19:22:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:58.274 19:22:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:29:58.274 19:22:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:58.274 19:22:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:58.274 19:22:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:58.274 19:22:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:58.274 19:22:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:58.274 19:22:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:58.274 19:22:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:58.274 19:22:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:58.274 19:22:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:58.274 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:58.274 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:58.274 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:58.274 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:58.274 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:58.274 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:58.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' 
'\''Malloc4'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:58.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:58.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:58.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:58.274 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:58.274 ' 00:30:01.557 [2024-07-24 19:22:06.588526] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.490 [2024-07-24 19:22:07.865272] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:05.013 [2024-07-24 19:22:10.221022] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:06.914 [2024-07-24 19:22:12.259595] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:08.285 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:08.285 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:08.285 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:08.285 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:08.285 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:08.285 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:08.285 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 
00:30:08.285 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:08.285 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:08.285 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:08.285 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:08.285 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:08.285 19:22:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:08.285 19:22:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:08.285 19:22:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.285 19:22:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:08.285 19:22:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:08.285 19:22:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
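The spdkcli_job.py run above drives the same JSON-RPC methods that SPDK's scripts/rpc.py exposes directly. A rough rpc.py equivalent of the first few create commands, shown as a hedged sketch (the method names are SPDK's; exact option spellings can vary between SPDK versions, so treat the flags as indicative):

# hedged sketch: approximate rpc.py equivalent of the spdkcli create sequence above
./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc1
./scripts/rpc.py nvmf_create_transport -t tcp -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260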
00:30:08.285 19:22:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:08.285 19:22:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:08.851 19:22:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:08.851 19:22:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:08.851 19:22:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:08.851 19:22:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:08.851 19:22:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.851 19:22:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:08.851 19:22:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:08.851 19:22:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.851 19:22:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:08.851 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:08.851 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:08.851 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:08.851 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:08.851 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:08.851 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:08.851 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:08.851 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:08.851 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:08.851 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:08.851 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:08.851 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:08.851 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:08.851 ' 00:30:15.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:15.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:15.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:15.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:15.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:15.412 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:15.412 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 
'nqn.2014-08.org.spdk:cnode3', False] 00:30:15.412 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:15.412 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:15.412 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:15.412 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:15.412 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:15.412 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:15.412 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:15.412 19:22:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:15.412 19:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:15.412 19:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:15.412 19:22:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1778990 00:30:15.412 19:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1778990 ']' 00:30:15.412 19:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1778990 00:30:15.412 19:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:30:15.412 19:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:15.412 19:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1778990 00:30:15.412 19:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:15.413 19:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:15.413 19:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1778990' 00:30:15.413 killing process with pid 1778990 00:30:15.413 19:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1778990 00:30:15.413 19:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1778990 00:30:15.413 19:22:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:15.413 19:22:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:15.413 19:22:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1778990 ']' 00:30:15.413 19:22:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1778990 00:30:15.413 19:22:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1778990 ']' 00:30:15.413 19:22:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1778990 00:30:15.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1778990) - No such process 00:30:15.413 19:22:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1778990 is not found' 00:30:15.413 Process with pid 1778990 is not found 00:30:15.413 19:22:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:15.413 19:22:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:15.413 19:22:20 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:15.413 00:30:15.413 real 0m17.785s 00:30:15.413 user 0m38.225s 00:30:15.413 sys 0m1.063s 00:30:15.413 19:22:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:15.413 19:22:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:30:15.413 ************************************ 00:30:15.413 END TEST spdkcli_nvmf_tcp 00:30:15.413 ************************************ 00:30:15.413 19:22:20 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:15.413 19:22:20 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:15.413 19:22:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:15.413 19:22:20 -- common/autotest_common.sh@10 -- # set +x 00:30:15.413 ************************************ 00:30:15.413 START TEST nvmf_identify_passthru 00:30:15.413 ************************************ 00:30:15.413 19:22:20 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:15.413 * Looking for test storage... 00:30:15.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:15.413 19:22:20 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.413 19:22:20 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.413 19:22:20 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.413 19:22:20 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.413 19:22:20 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.413 19:22:20 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.413 19:22:20 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.413 19:22:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:15.413 19:22:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:15.413 19:22:20 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.413 19:22:20 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.413 19:22:20 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.413 19:22:20 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.413 19:22:20 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.413 19:22:20 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.413 19:22:20 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.413 19:22:20 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:15.413 19:22:20 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.413 19:22:20 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.413 19:22:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:15.413 19:22:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:15.413 19:22:20 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:15.413 19:22:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.947 19:22:23 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:17.947 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:17.947 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:17.947 Found net devices under 0000:84:00.0: cvl_0_0 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:17.947 Found net devices under 0000:84:00.1: cvl_0_1 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
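The device-discovery trace above reduces to a small pattern: nvmf/common.sh looks up supported vendor:device IDs in a PCI cache, then resolves each matched function to its kernel interface through sysfs. A minimal standalone sketch of that pattern, using only the IDs and paths seen in the trace (population of pci_bus_cache is elided here; in the real script it is built from the PCI bus before this point):

    #!/usr/bin/env bash
    shopt -s nullglob                  # avoid literal globs when a device has no netdev
    declare -A pci_bus_cache           # "vendor:device" -> space-separated PCI addresses
    intel=0x8086
    e810=()
    # the two E810 device IDs matched in the trace above
    e810+=(${pci_bus_cache["$intel:0x1592"]})
    e810+=(${pci_bus_cache["$intel:0x159b"]})
    net_devs=()
    for pci in "${e810[@]}"; do
        # each PCI function exposes its bound netdev(s) under sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done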
00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:17.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:30:17.947 00:30:17.947 --- 10.0.0.2 ping statistics --- 00:30:17.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.947 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:17.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:30:17.947 00:30:17.947 --- 10.0.0.1 ping statistics --- 00:30:17.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.947 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:17.947 19:22:23 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:17.947 19:22:23 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:17.947 19:22:23 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:17.947 19:22:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:17.947 19:22:23 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:17.947 19:22:23 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:17.947 19:22:23 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:17.947 19:22:23 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:17.947 19:22:23 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:17.947 19:22:23 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:17.947 19:22:23 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:17.947 19:22:23 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:17.947 19:22:23 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:17.947 19:22:23 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:17.947 19:22:23 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:17.947 19:22:23 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:30:17.947 19:22:23 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:82:00.0 00:30:17.947 19:22:23 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:30:17.947 19:22:23 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:30:17.947 19:22:23 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:30:17.947 19:22:23 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:17.947 19:22:23 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:18.206 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.396 
19:22:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:30:22.396 19:22:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:30:22.396 19:22:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:22.396 19:22:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:22.396 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.614 19:22:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:26.614 19:22:32 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:26.614 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:26.614 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:26.887 19:22:32 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:26.887 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:26.887 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:26.887 19:22:32 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1783759 00:30:26.887 19:22:32 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:26.887 19:22:32 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:26.887 19:22:32 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1783759 00:30:26.887 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1783759 ']' 00:30:26.887 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.887 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:26.887 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.887 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:26.887 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:26.887 [2024-07-24 19:22:32.387110] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:30:26.887 [2024-07-24 19:22:32.387219] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.887 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.887 [2024-07-24 19:22:32.482417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:27.146 [2024-07-24 19:22:32.701694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:27.146 [2024-07-24 19:22:32.701811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:27.146 [2024-07-24 19:22:32.701847] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.146 [2024-07-24 19:22:32.701876] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.146 [2024-07-24 19:22:32.701902] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:27.146 [2024-07-24 19:22:32.702061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.146 [2024-07-24 19:22:32.702125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:27.146 [2024-07-24 19:22:32.702154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:27.146 [2024-07-24 19:22:32.702158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.146 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:27.146 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:30:27.146 19:22:32 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:27.146 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.146 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:27.146 INFO: Log level set to 20 00:30:27.146 INFO: Requests: 00:30:27.146 { 00:30:27.146 "jsonrpc": "2.0", 00:30:27.146 "method": "nvmf_set_config", 00:30:27.146 "id": 1, 00:30:27.146 "params": { 00:30:27.146 "admin_cmd_passthru": { 00:30:27.146 "identify_ctrlr": true 00:30:27.146 } 00:30:27.146 } 00:30:27.146 } 00:30:27.146 00:30:27.405 INFO: response: 00:30:27.405 { 00:30:27.405 "jsonrpc": "2.0", 00:30:27.405 "id": 1, 00:30:27.405 "result": true 00:30:27.405 } 00:30:27.405 00:30:27.405 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.405 19:22:32 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:27.405 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.405 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:27.405 INFO: Setting log level to 20 00:30:27.405 INFO: Setting log level to 20 00:30:27.405 INFO: Log level set to 20 00:30:27.405 INFO: Log level set to 20 00:30:27.405 INFO: Requests: 00:30:27.405 { 00:30:27.405 "jsonrpc": "2.0", 00:30:27.405 "method": "framework_start_init", 00:30:27.405 "id": 1 00:30:27.405 } 00:30:27.405 00:30:27.405 INFO: Requests: 00:30:27.405 { 00:30:27.405 "jsonrpc": "2.0", 00:30:27.405 "method": "framework_start_init", 00:30:27.405 "id": 1 00:30:27.405 } 00:30:27.405 00:30:27.405 [2024-07-24 19:22:32.974074] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:27.405 INFO: response: 00:30:27.405 { 00:30:27.405 "jsonrpc": "2.0", 00:30:27.405 "id": 1, 00:30:27.405 "result": true 00:30:27.405 } 00:30:27.405 00:30:27.405 INFO: response: 00:30:27.405 { 00:30:27.405 "jsonrpc": "2.0", 00:30:27.405 "id": 1, 00:30:27.405 "result": true 00:30:27.405 } 00:30:27.405 00:30:27.405 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.405 19:22:32 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:27.405 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.405 19:22:32 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:27.405 INFO: Setting log level to 40 00:30:27.405 INFO: Setting log level to 40 00:30:27.405 INFO: Setting log level to 40 00:30:27.405 [2024-07-24 19:22:32.988808] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.405 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:27.405 19:22:32 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:27.405 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:27.405 19:22:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:27.405 19:22:33 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:30:27.405 19:22:33 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:27.405 19:22:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:30.687 Nvme0n1 00:30:30.687 19:22:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.687 19:22:35 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:30.687 19:22:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.688 19:22:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:30.688 19:22:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.688 19:22:35 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:30.688 19:22:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.688 19:22:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:30.688 19:22:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.688 19:22:35 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:30.688 19:22:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.688 19:22:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:30.688 [2024-07-24 19:22:35.901683] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:30.688 19:22:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.688 19:22:35 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:30.688 19:22:35 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.688 19:22:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:30.688 [ 00:30:30.688 { 00:30:30.688 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:30.688 "subtype": "Discovery", 00:30:30.688 "listen_addresses": [], 00:30:30.688 "allow_any_host": true, 00:30:30.688 "hosts": [] 00:30:30.688 }, 00:30:30.688 { 00:30:30.688 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:30.688 "subtype": "NVMe", 00:30:30.688 "listen_addresses": [ 00:30:30.688 { 00:30:30.688 "trtype": "TCP", 00:30:30.688 "adrfam": "IPv4", 00:30:30.688 "traddr": "10.0.0.2", 00:30:30.688 "trsvcid": "4420" 00:30:30.688 } 00:30:30.688 ], 00:30:30.688 "allow_any_host": true, 00:30:30.688 "hosts": [], 00:30:30.688 "serial_number": 
"SPDK00000000000001", 00:30:30.688 "model_number": "SPDK bdev Controller", 00:30:30.688 "max_namespaces": 1, 00:30:30.688 "min_cntlid": 1, 00:30:30.688 "max_cntlid": 65519, 00:30:30.688 "namespaces": [ 00:30:30.688 { 00:30:30.688 "nsid": 1, 00:30:30.688 "bdev_name": "Nvme0n1", 00:30:30.688 "name": "Nvme0n1", 00:30:30.688 "nguid": "EA5F2F92A1D945FEB8EC0AEBB7FA7A9B", 00:30:30.688 "uuid": "ea5f2f92-a1d9-45fe-b8ec-0aebb7fa7a9b" 00:30:30.688 } 00:30:30.688 ] 00:30:30.688 } 00:30:30.688 ] 00:30:30.688 19:22:35 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.688 19:22:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:30.688 19:22:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:30.688 19:22:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:30.688 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.688 19:22:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:30:30.688 19:22:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:30.688 19:22:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:30.688 19:22:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:30.688 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.688 19:22:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:30.688 19:22:36 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:30:30.688 19:22:36 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:30.688 19:22:36 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:30.688 19:22:36 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:30.688 19:22:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:30.688 19:22:36 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:30.688 19:22:36 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:30.688 19:22:36 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:30.688 19:22:36 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:30.688 19:22:36 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:30.688 19:22:36 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:30.688 19:22:36 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:30.688 19:22:36 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:30.688 19:22:36 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:30.688 rmmod nvme_tcp 00:30:30.688 rmmod nvme_fabrics 00:30:30.688 rmmod nvme_keyring 00:30:30.946 19:22:36 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:30.946 19:22:36 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:30.946 19:22:36 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:30.946 19:22:36 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1783759 ']' 00:30:30.946 19:22:36 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1783759 00:30:30.946 19:22:36 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1783759 ']' 00:30:30.946 19:22:36 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1783759 00:30:30.946 19:22:36 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:30:30.946 19:22:36 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:30.946 19:22:36 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1783759 00:30:30.946 19:22:36 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:30.946 19:22:36 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:30.946 19:22:36 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1783759' 00:30:30.946 killing process with pid 1783759 00:30:30.946 19:22:36 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1783759 00:30:30.946 19:22:36 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1783759 00:30:32.847 19:22:38 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:32.847 19:22:38 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:32.847 19:22:38 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:32.847 19:22:38 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:32.847 19:22:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:32.847 19:22:38 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.847 19:22:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:32.847 19:22:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.765 19:22:40 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:34.765 00:30:34.765 real 0m19.810s 00:30:34.765 user 0m28.525s 00:30:34.765 sys 0m3.439s 00:30:34.765 19:22:40 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:34.765 19:22:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:34.765 ************************************ 00:30:34.765 END TEST nvmf_identify_passthru 00:30:34.765 ************************************ 00:30:34.765 19:22:40 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:34.765 19:22:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:34.765 19:22:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:34.765 19:22:40 -- common/autotest_common.sh@10 -- # set +x 00:30:34.765 ************************************ 00:30:34.765 START TEST nvmf_dif 00:30:34.765 ************************************ 00:30:34.765 19:22:40 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:34.765 * Looking for test storage... 
00:30:34.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:34.765 19:22:40 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:34.765 19:22:40 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.765 19:22:40 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.765 19:22:40 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.765 19:22:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.765 19:22:40 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.765 19:22:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.765 19:22:40 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:34.765 19:22:40 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:34.765 19:22:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:34.765 19:22:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:34.765 19:22:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:34.765 19:22:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:34.765 19:22:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:34.765 19:22:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.766 19:22:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:34.766 19:22:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.766 19:22:40 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:34.766 19:22:40 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:34.766 19:22:40 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:34.766 19:22:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:38.053 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:38.053 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
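Once both interfaces are found, nvmf_tcp_init (traced in full for the passthru test above and about to repeat below) wires them into a point-to-point topology: the target-side port is moved into a private network namespace, each side gets one address in 10.0.0.0/24, and the NVMe/TCP port is opened in the initiator-side firewall. Condensed from the commands exactly as traced:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator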
00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:38.053 Found net devices under 0000:84:00.0: cvl_0_0 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:38.053 Found net devices under 0000:84:00.1: cvl_0_1 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:38.053 19:22:43 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:38.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:38.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:30:38.053 00:30:38.053 --- 10.0.0.2 ping statistics --- 00:30:38.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.053 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:38.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:38.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:30:38.053 00:30:38.053 --- 10.0.0.1 ping statistics --- 00:30:38.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.053 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:38.053 19:22:43 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:39.433 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:39.433 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:39.433 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:39.433 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:39.433 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:39.433 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:39.433 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:39.433 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:39.433 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:39.433 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:39.433 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:39.433 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:39.433 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:39.433 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:39.433 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:39.433 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:39.433 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:39.692 19:22:45 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:39.692 19:22:45 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:39.692 19:22:45 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:39.692 19:22:45 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:39.692 19:22:45 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:39.692 19:22:45 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:39.692 19:22:45 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:39.692 19:22:45 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:39.692 19:22:45 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:39.692 19:22:45 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:39.692 19:22:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:39.692 19:22:45 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1787170 00:30:39.692 19:22:45 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:39.692 19:22:45 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1787170 00:30:39.692 19:22:45 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1787170 ']' 00:30:39.692 19:22:45 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.692 19:22:45 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:39.692 19:22:45 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.692 19:22:45 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:39.692 19:22:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:39.692 [2024-07-24 19:22:45.241811] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:30:39.692 [2024-07-24 19:22:45.241920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.692 EAL: No free 2048 kB hugepages reported on node 1 00:30:39.692 [2024-07-24 19:22:45.357689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.951 [2024-07-24 19:22:45.554821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.951 [2024-07-24 19:22:45.554930] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.951 [2024-07-24 19:22:45.554967] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.951 [2024-07-24 19:22:45.554997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.951 [2024-07-24 19:22:45.555023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
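The nvmfpid/waitforlisten pair traced above is the harness's standard launch pattern: start nvmf_tgt inside the target namespace, record its PID, and block until the app answers on its RPC socket. A minimal sketch, assuming the default socket path /var/tmp/spdk.sock named in the log (the real waitforlisten helper in autotest_common.sh polls the RPC server itself rather than just the socket file):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # crude stand-in for waitforlisten: wait for the UNIX-domain socket to appear
    while ! [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.5
    done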
00:30:39.951 [2024-07-24 19:22:45.555087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.887 19:22:46 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:40.887 19:22:46 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:30:40.887 19:22:46 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:40.887 19:22:46 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:40.887 19:22:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:41.146 19:22:46 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.146 19:22:46 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:41.146 19:22:46 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:41.146 19:22:46 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.146 19:22:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:41.146 [2024-07-24 19:22:46.606658] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.146 19:22:46 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.146 19:22:46 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:41.146 19:22:46 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:41.146 19:22:46 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:41.146 19:22:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:41.146 ************************************ 00:30:41.146 START TEST fio_dif_1_default 00:30:41.146 ************************************ 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:41.146 bdev_null0 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:41.146 [2024-07-24 19:22:46.687954] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:41.146 19:22:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:41.147 { 00:30:41.147 "params": { 00:30:41.147 "name": "Nvme$subsystem", 00:30:41.147 "trtype": "$TEST_TRANSPORT", 00:30:41.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:41.147 "adrfam": "ipv4", 00:30:41.147 "trsvcid": "$NVMF_PORT", 00:30:41.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:41.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:41.147 "hdgst": ${hdgst:-false}, 00:30:41.147 "ddgst": ${ddgst:-false} 00:30:41.147 }, 00:30:41.147 "method": "bdev_nvme_attach_controller" 00:30:41.147 } 00:30:41.147 EOF 00:30:41.147 )") 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:41.147 "params": { 00:30:41.147 "name": "Nvme0", 00:30:41.147 "trtype": "tcp", 00:30:41.147 "traddr": "10.0.0.2", 00:30:41.147 "adrfam": "ipv4", 00:30:41.147 "trsvcid": "4420", 00:30:41.147 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:41.147 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:41.147 "hdgst": false, 00:30:41.147 "ddgst": false 00:30:41.147 }, 00:30:41.147 "method": "bdev_nvme_attach_controller" 00:30:41.147 }' 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:41.147 19:22:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:41.406 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:41.406 fio-3.35 00:30:41.406 Starting 1 thread 00:30:41.406 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.613 00:30:53.613 filename0: (groupid=0, jobs=1): err= 0: pid=1787538: Wed Jul 24 19:22:57 2024 00:30:53.613 read: IOPS=184, BW=740KiB/s (757kB/s)(7424KiB/10039msec) 00:30:53.613 slat (usec): min=6, max=107, avg=12.77, stdev= 4.43 00:30:53.613 clat (usec): min=812, max=47837, avg=21595.84, stdev=20292.23 00:30:53.613 lat (usec): min=822, max=47875, avg=21608.62, stdev=20291.92 00:30:53.613 clat percentiles (usec): 00:30:53.613 | 1.00th=[ 848], 5.00th=[ 1090], 10.00th=[ 1205], 20.00th=[ 1237], 00:30:53.613 | 30.00th=[ 1254], 40.00th=[ 1303], 50.00th=[41157], 60.00th=[41681], 00:30:53.613 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:30:53.613 | 99.00th=[42730], 99.50th=[43779], 99.90th=[47973], 99.95th=[47973], 00:30:53.613 | 99.99th=[47973] 00:30:53.613 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=740.80, stdev=33.28, samples=20 00:30:53.613 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:30:53.613 lat 
(usec) : 1000=3.50% 00:30:53.613 lat (msec) : 2=46.28%, 50=50.22% 00:30:53.613 cpu : usr=89.35%, sys=10.25%, ctx=15, majf=0, minf=257 00:30:53.613 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.613 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.613 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:53.613 00:30:53.613 Run status group 0 (all jobs): 00:30:53.613 READ: bw=740KiB/s (757kB/s), 740KiB/s-740KiB/s (757kB/s-757kB/s), io=7424KiB (7602kB), run=10039-10039msec 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.613 00:30:53.613 real 0m11.482s 00:30:53.613 user 0m10.462s 00:30:53.613 sys 0m1.406s 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:53.613 ************************************ 00:30:53.613 END TEST fio_dif_1_default 00:30:53.613 ************************************ 00:30:53.613 19:22:58 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:53.613 19:22:58 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:53.613 19:22:58 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:53.613 19:22:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:53.613 ************************************ 00:30:53.613 START TEST fio_dif_1_multi_subsystems 00:30:53.613 ************************************ 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.613 bdev_null0 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.613 [2024-07-24 19:22:58.248874] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.613 bdev_null1 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.613 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.613 19:22:58 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.614 { 00:30:53.614 "params": { 00:30:53.614 "name": "Nvme$subsystem", 00:30:53.614 "trtype": "$TEST_TRANSPORT", 00:30:53.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.614 "adrfam": "ipv4", 00:30:53.614 "trsvcid": "$NVMF_PORT", 00:30:53.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.614 "hdgst": ${hdgst:-false}, 00:30:53.614 "ddgst": ${ddgst:-false} 00:30:53.614 }, 00:30:53.614 "method": "bdev_nvme_attach_controller" 00:30:53.614 } 00:30:53.614 EOF 00:30:53.614 )") 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@56 -- # cat 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.614 { 00:30:53.614 "params": { 00:30:53.614 "name": "Nvme$subsystem", 00:30:53.614 "trtype": "$TEST_TRANSPORT", 00:30:53.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.614 "adrfam": "ipv4", 00:30:53.614 "trsvcid": "$NVMF_PORT", 00:30:53.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.614 "hdgst": ${hdgst:-false}, 00:30:53.614 "ddgst": ${ddgst:-false} 00:30:53.614 }, 00:30:53.614 "method": "bdev_nvme_attach_controller" 00:30:53.614 } 00:30:53.614 EOF 00:30:53.614 )") 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
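The @554/@556/@558 steps traced here assemble the JSON that fio receives on /dev/fd/62: each subsystem appends one bdev_nvme_attach_controller object to the config array, IFS=, joins the entries, and jq validates the result. A minimal standalone sketch of the same idiom follows; the per-controller fields are copied from this trace, but the top-level wrapper shape is an assumption for illustration, since gen_nvmf_target_json's full output is not shown in this excerpt.

#!/usr/bin/env bash
# Build one attach-controller entry per subsystem, mirroring the traced heredoc idiom.
config=()
for i in 0 1; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the entries with commas (IFS=,) and validate with jq, wrapping them in an
# assumed top-level bdev config block purely so the result is well-formed JSON.
IFS=,
printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .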
00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:53.614 "params": { 00:30:53.614 "name": "Nvme0", 00:30:53.614 "trtype": "tcp", 00:30:53.614 "traddr": "10.0.0.2", 00:30:53.614 "adrfam": "ipv4", 00:30:53.614 "trsvcid": "4420", 00:30:53.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:53.614 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:53.614 "hdgst": false, 00:30:53.614 "ddgst": false 00:30:53.614 }, 00:30:53.614 "method": "bdev_nvme_attach_controller" 00:30:53.614 },{ 00:30:53.614 "params": { 00:30:53.614 "name": "Nvme1", 00:30:53.614 "trtype": "tcp", 00:30:53.614 "traddr": "10.0.0.2", 00:30:53.614 "adrfam": "ipv4", 00:30:53.614 "trsvcid": "4420", 00:30:53.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:53.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:53.614 "hdgst": false, 00:30:53.614 "ddgst": false 00:30:53.614 }, 00:30:53.614 "method": "bdev_nvme_attach_controller" 00:30:53.614 }' 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:53.614 19:22:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.614 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:53.614 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:53.614 fio-3.35 00:30:53.614 Starting 2 threads 00:30:53.614 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.824 00:31:05.824 filename0: (groupid=0, jobs=1): err= 0: pid=1788936: Wed Jul 24 19:23:09 2024 00:31:05.824 read: IOPS=183, BW=733KiB/s (750kB/s)(7344KiB/10022msec) 00:31:05.824 slat (nsec): min=5860, max=54971, avg=18409.03, stdev=8201.36 00:31:05.824 clat (usec): min=762, max=43684, avg=21778.04, stdev=20390.16 00:31:05.824 lat (usec): min=771, max=43716, avg=21796.45, stdev=20389.73 00:31:05.824 clat percentiles (usec): 00:31:05.824 | 1.00th=[ 922], 5.00th=[ 1172], 10.00th=[ 1205], 20.00th=[ 1270], 00:31:05.824 | 30.00th=[ 1401], 40.00th=[ 1450], 50.00th=[41157], 60.00th=[41681], 00:31:05.824 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:31:05.824 | 99.00th=[43254], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:31:05.824 | 99.99th=[43779] 
00:31:05.824 bw ( KiB/s): min= 704, max= 768, per=65.88%, avg=732.80, stdev=32.67, samples=20 00:31:05.824 iops : min= 176, max= 192, avg=183.20, stdev= 8.17, samples=20 00:31:05.824 lat (usec) : 1000=1.85% 00:31:05.824 lat (msec) : 2=46.08%, 4=1.96%, 50=50.11% 00:31:05.824 cpu : usr=93.92%, sys=5.50%, ctx=31, majf=0, minf=126 00:31:05.824 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.824 issued rwts: total=1836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.824 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:05.824 filename1: (groupid=0, jobs=1): err= 0: pid=1788937: Wed Jul 24 19:23:09 2024 00:31:05.824 read: IOPS=94, BW=378KiB/s (387kB/s)(3792KiB/10022msec) 00:31:05.824 slat (usec): min=6, max=224, avg=18.62, stdev=11.18 00:31:05.824 clat (usec): min=40921, max=44338, avg=42228.90, stdev=581.19 00:31:05.824 lat (usec): min=40932, max=44358, avg=42247.52, stdev=583.92 00:31:05.824 clat percentiles (usec): 00:31:05.824 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:31:05.824 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:05.824 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43779], 00:31:05.824 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:31:05.824 | 99.99th=[44303] 00:31:05.824 bw ( KiB/s): min= 352, max= 384, per=33.93%, avg=377.60, stdev=13.13, samples=20 00:31:05.824 iops : min= 88, max= 96, avg=94.40, stdev= 3.28, samples=20 00:31:05.824 lat (msec) : 50=100.00% 00:31:05.824 cpu : usr=94.96%, sys=4.53%, ctx=13, majf=0, minf=205 00:31:05.824 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.824 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.824 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.824 issued rwts: total=948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.824 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:05.824 00:31:05.824 Run status group 0 (all jobs): 00:31:05.824 READ: bw=1111KiB/s (1138kB/s), 378KiB/s-733KiB/s (387kB/s-750kB/s), io=10.9MiB (11.4MB), run=10022-10022msec 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 
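rpc_cmd in this harness forwards to SPDK's scripts/rpc.py against the running target. Outside the harness, the setup this test performed and the teardown running here look roughly like the following sketch; the RPC names and flags are copied from the trace, and the rpc.py path assumes an SPDK checkout as the working directory.

# Per-subsystem setup, as in create_subsystems above (shown for cnode0):
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Teardown, mirroring the destroy_subsystems calls traced here:
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0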
00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.824 00:31:05.824 real 0m11.595s 00:31:05.824 user 0m20.422s 00:31:05.824 sys 0m1.400s 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:05.824 19:23:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:05.824 ************************************ 00:31:05.824 END TEST fio_dif_1_multi_subsystems 00:31:05.824 ************************************ 00:31:05.824 19:23:09 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:05.824 19:23:09 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:05.824 19:23:09 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:05.824 19:23:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:05.824 ************************************ 00:31:05.824 START TEST fio_dif_rand_params 00:31:05.824 ************************************ 00:31:05.824 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.825 bdev_null0 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:05.825 [2024-07-24 19:23:09.915113] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:05.825 { 00:31:05.825 "params": { 00:31:05.825 "name": "Nvme$subsystem", 00:31:05.825 "trtype": "$TEST_TRANSPORT", 00:31:05.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.825 "adrfam": "ipv4", 00:31:05.825 "trsvcid": "$NVMF_PORT", 00:31:05.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.825 "hdgst": ${hdgst:-false}, 00:31:05.825 "ddgst": ${ddgst:-false} 00:31:05.825 }, 00:31:05.825 "method": "bdev_nvme_attach_controller" 00:31:05.825 } 00:31:05.825 EOF 00:31:05.825 )") 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:05.825 19:23:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:05.825 "params": { 00:31:05.825 "name": "Nvme0", 00:31:05.825 "trtype": "tcp", 00:31:05.825 "traddr": "10.0.0.2", 00:31:05.825 "adrfam": "ipv4", 00:31:05.825 "trsvcid": "4420", 00:31:05.825 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:05.825 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:05.825 "hdgst": false, 00:31:05.825 "ddgst": false 00:31:05.825 }, 00:31:05.825 "method": "bdev_nvme_attach_controller" 00:31:05.825 }' 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:05.825 19:23:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:05.825 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:05.825 ... 00:31:05.825 fio-3.35 00:31:05.825 Starting 3 threads 00:31:05.825 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.094 00:31:11.094 filename0: (groupid=0, jobs=1): err= 0: pid=1790215: Wed Jul 24 19:23:16 2024 00:31:11.094 read: IOPS=117, BW=14.6MiB/s (15.3MB/s)(73.2MiB/5008msec) 00:31:11.094 slat (nsec): min=5933, max=51835, avg=17772.90, stdev=6128.58 00:31:11.094 clat (usec): min=8947, max=73227, avg=25602.27, stdev=11488.04 00:31:11.094 lat (usec): min=8962, max=73236, avg=25620.04, stdev=11486.71 00:31:11.094 clat percentiles (usec): 00:31:11.094 | 1.00th=[13698], 5.00th=[16188], 10.00th=[17695], 20.00th=[20055], 00:31:11.094 | 30.00th=[21365], 40.00th=[22414], 50.00th=[23200], 60.00th=[24249], 00:31:11.094 | 70.00th=[25297], 80.00th=[26608], 90.00th=[28967], 95.00th=[63701], 00:31:11.094 | 99.00th=[68682], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:31:11.094 | 99.99th=[72877] 00:31:11.094 bw ( KiB/s): min= 8448, max=17408, per=33.65%, avg=14950.40, stdev=2825.55, samples=10 00:31:11.094 iops : min= 66, max= 136, avg=116.80, stdev=22.07, samples=10 00:31:11.094 lat (msec) : 10=0.17%, 20=19.97%, 50=73.21%, 100=6.66% 00:31:11.094 cpu : usr=94.35%, sys=5.17%, ctx=13, majf=0, minf=51 00:31:11.094 IO depths : 1=2.6%, 2=97.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:11.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.094 issued rwts: total=586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:11.094 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:11.094 filename0: (groupid=0, jobs=1): err= 0: pid=1790216: Wed Jul 24 19:23:16 2024 00:31:11.094 read: IOPS=114, BW=14.3MiB/s (15.0MB/s)(71.8MiB/5007msec) 00:31:11.094 slat (usec): min=6, max=125, avg=19.24, stdev= 8.67 00:31:11.094 clat (usec): min=9748, max=70205, avg=26140.84, stdev=8101.07 00:31:11.094 lat (usec): min=9762, max=70220, avg=26160.08, stdev=8101.23 00:31:11.094 clat percentiles (usec): 00:31:11.094 | 1.00th=[13829], 5.00th=[16188], 10.00th=[17695], 20.00th=[20055], 00:31:11.094 | 30.00th=[22414], 40.00th=[23987], 50.00th=[25560], 60.00th=[26608], 00:31:11.094 | 70.00th=[27919], 80.00th=[30540], 90.00th=[34866], 95.00th=[36439], 00:31:11.094 | 99.00th=[65274], 99.50th=[66847], 99.90th=[69731], 99.95th=[69731], 00:31:11.094 | 99.99th=[69731] 00:31:11.094 bw ( KiB/s): min=10752, max=17664, per=32.91%, avg=14617.60, stdev=2163.67, samples=10 00:31:11.094 iops : min= 84, max= 138, avg=114.20, stdev=16.90, samples=10 00:31:11.094 lat (msec) : 10=0.17%, 20=19.34%, 50=78.40%, 100=2.09% 00:31:11.094 cpu : usr=92.31%, sys=5.79%, ctx=194, majf=0, minf=140 00:31:11.094 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:11.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.094 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:11.094 latency : target=0, window=0, percentile=100.00%, depth=3 
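The per-file fio summaries above can be cross-checked directly: with bs=128k, the bw line is just the iops line scaled by the block size. For example, for pid=1790215:

# avg bw (KiB/s) = avg iops * block size (KiB); values taken from the summary above
echo '116.80 * 128' | bc   # 14950.40, matching avg=14950.40 KiB/s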
00:31:11.094 filename0: (groupid=0, jobs=1): err= 0: pid=1790217: Wed Jul 24 19:23:16 2024 00:31:11.094 read: IOPS=115, BW=14.4MiB/s (15.1MB/s)(72.2MiB/5007msec) 00:31:11.094 slat (nsec): min=6662, max=85383, avg=24347.24, stdev=12231.02 00:31:11.094 clat (usec): min=7295, max=65896, avg=25950.86, stdev=6321.89 00:31:11.094 lat (usec): min=7310, max=65912, avg=25975.21, stdev=6324.53 00:31:11.094 clat percentiles (usec): 00:31:11.094 | 1.00th=[11731], 5.00th=[16712], 10.00th=[18482], 20.00th=[21627], 00:31:11.094 | 30.00th=[23725], 40.00th=[25035], 50.00th=[26084], 60.00th=[26870], 00:31:11.094 | 70.00th=[28181], 80.00th=[29754], 90.00th=[32375], 95.00th=[34341], 00:31:11.094 | 99.00th=[61604], 99.50th=[61604], 99.90th=[65799], 99.95th=[65799], 00:31:11.094 | 99.99th=[65799] 00:31:11.094 bw ( KiB/s): min=12288, max=17920, per=33.14%, avg=14723.10, stdev=1635.55, samples=10 00:31:11.094 iops : min= 96, max= 140, avg=115.00, stdev=12.76, samples=10 00:31:11.094 lat (msec) : 10=0.17%, 20=14.19%, 50=84.60%, 100=1.04% 00:31:11.094 cpu : usr=93.37%, sys=5.91%, ctx=45, majf=0, minf=112 00:31:11.094 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:11.094 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.094 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:11.094 issued rwts: total=578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:11.094 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:11.094 00:31:11.094 Run status group 0 (all jobs): 00:31:11.094 READ: bw=43.4MiB/s (45.5MB/s), 14.3MiB/s-14.6MiB/s (15.0MB/s-15.3MB/s), io=217MiB (228MB), run=5007-5008msec 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.094 bdev_null0 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.094 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.095 [2024-07-24 19:23:16.468923] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.095 bdev_null1 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.095 
19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.095 bdev_null2 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:11.095 19:23:16 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.095 { 00:31:11.095 "params": { 00:31:11.095 "name": "Nvme$subsystem", 00:31:11.095 "trtype": "$TEST_TRANSPORT", 00:31:11.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.095 "adrfam": "ipv4", 00:31:11.095 "trsvcid": "$NVMF_PORT", 00:31:11.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.095 "hdgst": ${hdgst:-false}, 00:31:11.095 "ddgst": ${ddgst:-false} 00:31:11.095 }, 00:31:11.095 "method": "bdev_nvme_attach_controller" 00:31:11.095 } 00:31:11.095 EOF 00:31:11.095 )") 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.095 { 00:31:11.095 "params": { 00:31:11.095 "name": "Nvme$subsystem", 00:31:11.095 "trtype": "$TEST_TRANSPORT", 00:31:11.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.095 "adrfam": "ipv4", 00:31:11.095 "trsvcid": "$NVMF_PORT", 00:31:11.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.095 "hdgst": ${hdgst:-false}, 00:31:11.095 "ddgst": ${ddgst:-false} 00:31:11.095 }, 00:31:11.095 "method": "bdev_nvme_attach_controller" 00:31:11.095 } 00:31:11.095 EOF 00:31:11.095 )") 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 
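The ldd/grep/awk lines around here detect whether the fio bdev plugin was linked against a sanitizer runtime, so that runtime can be placed ahead of the plugin in LD_PRELOAD and initialize first. A standalone sketch of the idiom, with the plugin and fio paths as used in this run and placeholder names for the JSON config and job file:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
# If the plugin links libasan, the sanitizer runtime must load before the plugin does.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio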
00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.095 { 00:31:11.095 "params": { 00:31:11.095 "name": "Nvme$subsystem", 00:31:11.095 "trtype": "$TEST_TRANSPORT", 00:31:11.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.095 "adrfam": "ipv4", 00:31:11.095 "trsvcid": "$NVMF_PORT", 00:31:11.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.095 "hdgst": ${hdgst:-false}, 00:31:11.095 "ddgst": ${ddgst:-false} 00:31:11.095 }, 00:31:11.095 "method": "bdev_nvme_attach_controller" 00:31:11.095 } 00:31:11.095 EOF 00:31:11.095 )") 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:11.095 19:23:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:11.095 "params": { 00:31:11.095 "name": "Nvme0", 00:31:11.095 "trtype": "tcp", 00:31:11.095 "traddr": "10.0.0.2", 00:31:11.095 "adrfam": "ipv4", 00:31:11.095 "trsvcid": "4420", 00:31:11.095 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:11.095 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:11.095 "hdgst": false, 00:31:11.095 "ddgst": false 00:31:11.095 }, 00:31:11.095 "method": "bdev_nvme_attach_controller" 00:31:11.095 },{ 00:31:11.095 "params": { 00:31:11.095 "name": "Nvme1", 00:31:11.095 "trtype": "tcp", 00:31:11.095 "traddr": "10.0.0.2", 00:31:11.095 "adrfam": "ipv4", 00:31:11.095 "trsvcid": "4420", 00:31:11.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:11.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:11.095 "hdgst": false, 00:31:11.095 "ddgst": false 00:31:11.095 }, 00:31:11.095 "method": "bdev_nvme_attach_controller" 00:31:11.095 },{ 00:31:11.095 "params": { 00:31:11.095 "name": "Nvme2", 00:31:11.095 "trtype": "tcp", 00:31:11.095 "traddr": "10.0.0.2", 00:31:11.095 "adrfam": "ipv4", 00:31:11.095 "trsvcid": "4420", 00:31:11.096 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:11.096 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:11.096 "hdgst": false, 00:31:11.096 "ddgst": false 00:31:11.096 }, 00:31:11.096 "method": "bdev_nvme_attach_controller" 00:31:11.096 }' 00:31:11.096 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:11.096 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:11.096 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:11.096 19:23:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:11.096 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:11.096 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:11.096 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:11.096 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:11.096 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:11.096 19:23:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:11.355 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:11.355 ... 00:31:11.355 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:11.355 ... 00:31:11.355 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:11.355 ... 00:31:11.355 fio-3.35 00:31:11.355 Starting 24 threads 00:31:11.355 EAL: No free 2048 kB hugepages reported on node 1 00:31:23.562 00:31:23.562 filename0: (groupid=0, jobs=1): err= 0: pid=1791074: Wed Jul 24 19:23:28 2024 00:31:23.562 read: IOPS=365, BW=1461KiB/s (1496kB/s)(14.3MiB/10034msec) 00:31:23.562 slat (usec): min=7, max=123, avg=58.88, stdev=32.95 00:31:23.562 clat (usec): min=19325, max=63161, avg=43283.83, stdev=2537.66 00:31:23.562 lat (usec): min=19333, max=63209, avg=43342.72, stdev=2535.52 00:31:23.562 clat percentiles (usec): 00:31:23.562 | 1.00th=[41157], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:31:23.562 | 30.00th=[42730], 40.00th=[42730], 50.00th=[42730], 60.00th=[43254], 00:31:23.562 | 70.00th=[43254], 80.00th=[43779], 90.00th=[44827], 95.00th=[45876], 00:31:23.562 | 99.00th=[51119], 99.50th=[57410], 99.90th=[63177], 99.95th=[63177], 00:31:23.562 | 99.99th=[63177] 00:31:23.562 bw ( KiB/s): min= 1408, max= 1536, per=4.19%, avg=1459.20, stdev=64.34, samples=20 00:31:23.562 iops : min= 352, max= 384, avg=364.80, stdev=16.08, samples=20 00:31:23.562 lat (msec) : 20=0.44%, 50=98.25%, 100=1.31% 00:31:23.562 cpu : usr=94.25%, sys=3.00%, ctx=204, majf=0, minf=76 00:31:23.562 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:23.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.562 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.562 issued rwts: total=3664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.562 filename0: (groupid=0, jobs=1): err= 0: pid=1791075: Wed Jul 24 19:23:28 2024 00:31:23.562 read: IOPS=364, BW=1460KiB/s (1495kB/s)(14.3MiB/10041msec) 00:31:23.562 slat (usec): min=8, max=119, avg=35.94, stdev=15.62 00:31:23.562 clat (usec): min=26867, max=59951, avg=43558.43, stdev=2129.74 00:31:23.562 lat (usec): min=26918, max=60004, avg=43594.36, stdev=2129.29 00:31:23.562 clat percentiles (usec): 00:31:23.562 | 1.00th=[42206], 5.00th=[42730], 10.00th=[42730], 20.00th=[42730], 00:31:23.562 | 30.00th=[42730], 40.00th=[43254], 50.00th=[43254], 60.00th=[43254], 00:31:23.562 | 
70.00th=[43779], 80.00th=[43779], 90.00th=[44827], 95.00th=[45876], 00:31:23.562 | 99.00th=[53216], 99.50th=[55837], 99.90th=[59507], 99.95th=[60031], 00:31:23.562 | 99.99th=[60031] 00:31:23.562 bw ( KiB/s): min= 1408, max= 1536, per=4.19%, avg=1459.20, stdev=64.34, samples=20 00:31:23.562 iops : min= 352, max= 384, avg=364.80, stdev=16.08, samples=20 00:31:23.562 lat (msec) : 50=98.64%, 100=1.36% 00:31:23.562 cpu : usr=97.97%, sys=1.47%, ctx=41, majf=0, minf=48 00:31:23.562 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:23.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.562 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.562 issued rwts: total=3664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.562 filename0: (groupid=0, jobs=1): err= 0: pid=1791076: Wed Jul 24 19:23:28 2024 00:31:23.562 read: IOPS=363, BW=1454KiB/s (1489kB/s)(14.2MiB/10037msec) 00:31:23.562 slat (usec): min=9, max=105, avg=41.53, stdev=13.63 00:31:23.562 clat (usec): min=31524, max=70090, avg=43669.73, stdev=2279.97 00:31:23.562 lat (usec): min=31569, max=70113, avg=43711.26, stdev=2278.13 00:31:23.562 clat percentiles (usec): 00:31:23.562 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42730], 20.00th=[42730], 00:31:23.562 | 30.00th=[42730], 40.00th=[42730], 50.00th=[43254], 60.00th=[43254], 00:31:23.562 | 70.00th=[43254], 80.00th=[43779], 90.00th=[45351], 95.00th=[45876], 00:31:23.562 | 99.00th=[55837], 99.50th=[60031], 99.90th=[62653], 99.95th=[69731], 00:31:23.562 | 99.99th=[69731] 00:31:23.562 bw ( KiB/s): min= 1280, max= 1536, per=4.17%, avg=1452.80, stdev=75.15, samples=20 00:31:23.562 iops : min= 320, max= 384, avg=363.20, stdev=18.79, samples=20 00:31:23.562 lat (msec) : 50=98.19%, 100=1.81% 00:31:23.562 cpu : usr=97.75%, sys=1.55%, ctx=79, majf=0, minf=49 00:31:23.562 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:23.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.562 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.562 issued rwts: total=3648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.562 filename0: (groupid=0, jobs=1): err= 0: pid=1791077: Wed Jul 24 19:23:28 2024 00:31:23.562 read: IOPS=361, BW=1446KiB/s (1481kB/s)(14.1MiB/10001msec) 00:31:23.562 slat (usec): min=9, max=166, avg=49.81, stdev=21.52 00:31:23.562 clat (msec): min=31, max=134, avg=43.79, stdev= 6.23 00:31:23.562 lat (msec): min=31, max=134, avg=43.84, stdev= 6.23 00:31:23.562 clat percentiles (msec): 00:31:23.562 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:31:23.562 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 44], 60.00th=[ 44], 00:31:23.562 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 46], 00:31:23.562 | 99.00th=[ 54], 99.50th=[ 56], 99.90th=[ 136], 99.95th=[ 136], 00:31:23.562 | 99.99th=[ 136] 00:31:23.562 bw ( KiB/s): min= 1024, max= 1536, per=4.14%, avg=1441.68, stdev=119.48, samples=19 00:31:23.562 iops : min= 256, max= 384, avg=360.42, stdev=29.87, samples=19 00:31:23.562 lat (msec) : 50=98.62%, 100=0.94%, 250=0.44% 00:31:23.562 cpu : usr=95.66%, sys=2.61%, ctx=237, majf=0, minf=48 00:31:23.562 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:23.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.562 
complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.562 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.562 filename0: (groupid=0, jobs=1): err= 0: pid=1791078: Wed Jul 24 19:23:28 2024 00:31:23.562 read: IOPS=362, BW=1450KiB/s (1485kB/s)(14.2MiB/10019msec) 00:31:23.562 slat (nsec): min=6644, max=70454, avg=26981.33, stdev=10218.03 00:31:23.562 clat (msec): min=20, max=131, avg=43.89, stdev= 6.20 00:31:23.562 lat (msec): min=20, max=131, avg=43.92, stdev= 6.20 00:31:23.562 clat percentiles (msec): 00:31:23.562 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:31:23.562 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:31:23.562 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 46], 95.00th=[ 46], 00:31:23.562 | 99.00th=[ 52], 99.50th=[ 58], 99.90th=[ 132], 99.95th=[ 132], 00:31:23.562 | 99.99th=[ 132] 00:31:23.562 bw ( KiB/s): min= 1026, max= 1536, per=4.16%, avg=1446.50, stdev=117.82, samples=20 00:31:23.562 iops : min= 256, max= 384, avg=361.60, stdev=29.55, samples=20 00:31:23.562 lat (msec) : 50=98.62%, 100=0.94%, 250=0.44% 00:31:23.562 cpu : usr=92.74%, sys=3.71%, ctx=363, majf=0, minf=42 00:31:23.562 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:23.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.562 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.562 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.562 filename0: (groupid=0, jobs=1): err= 0: pid=1791079: Wed Jul 24 19:23:28 2024 00:31:23.562 read: IOPS=362, BW=1450KiB/s (1485kB/s)(14.2MiB/10019msec) 00:31:23.562 slat (nsec): min=8184, max=90615, avg=42362.18, stdev=13851.35 00:31:23.562 clat (usec): min=31393, max=92567, avg=43756.53, stdev=3740.23 00:31:23.562 lat (usec): min=31427, max=92584, avg=43798.90, stdev=3738.22 00:31:23.562 clat percentiles (usec): 00:31:23.562 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42730], 20.00th=[42730], 00:31:23.562 | 30.00th=[42730], 40.00th=[42730], 50.00th=[43254], 60.00th=[43254], 00:31:23.562 | 70.00th=[43254], 80.00th=[43779], 90.00th=[45351], 95.00th=[45876], 00:31:23.562 | 99.00th=[55313], 99.50th=[59507], 99.90th=[92799], 99.95th=[92799], 00:31:23.562 | 99.99th=[92799] 00:31:23.562 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1446.40, stdev=93.78, samples=20 00:31:23.562 iops : min= 288, max= 384, avg=361.60, stdev=23.45, samples=20 00:31:23.562 lat (msec) : 50=98.24%, 100=1.76% 00:31:23.562 cpu : usr=97.95%, sys=1.37%, ctx=90, majf=0, minf=36 00:31:23.562 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:23.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.562 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.562 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.562 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.563 filename0: (groupid=0, jobs=1): err= 0: pid=1791080: Wed Jul 24 19:23:28 2024 00:31:23.563 read: IOPS=362, BW=1450KiB/s (1485kB/s)(14.2MiB/10016msec) 00:31:23.563 slat (usec): min=8, max=151, avg=27.73, stdev=14.36 00:31:23.563 clat (msec): min=20, max=128, avg=43.88, stdev= 5.97 00:31:23.563 lat (msec): min=20, max=128, avg=43.90, stdev= 5.97 00:31:23.563 clat percentiles (msec): 00:31:23.563 | 
1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:31:23.563 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:31:23.563 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 46], 95.00th=[ 46], 00:31:23.563 | 99.00th=[ 52], 99.50th=[ 58], 99.90th=[ 129], 99.95th=[ 129], 00:31:23.563 | 99.99th=[ 129] 00:31:23.563 bw ( KiB/s): min= 1152, max= 1536, per=4.14%, avg=1441.68, stdev=93.89, samples=19 00:31:23.563 iops : min= 288, max= 384, avg=360.42, stdev=23.47, samples=19 00:31:23.563 lat (msec) : 50=98.62%, 100=0.94%, 250=0.44% 00:31:23.563 cpu : usr=97.77%, sys=1.55%, ctx=54, majf=0, minf=53 00:31:23.563 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:23.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.563 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.563 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.563 filename0: (groupid=0, jobs=1): err= 0: pid=1791081: Wed Jul 24 19:23:28 2024 00:31:23.563 read: IOPS=362, BW=1450KiB/s (1485kB/s)(14.2MiB/10019msec) 00:31:23.563 slat (usec): min=13, max=102, avg=44.91, stdev=14.92 00:31:23.563 clat (msec): min=36, max=115, avg=43.74, stdev= 3.86 00:31:23.563 lat (msec): min=36, max=115, avg=43.79, stdev= 3.86 00:31:23.563 clat percentiles (msec): 00:31:23.563 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:31:23.563 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 44], 60.00th=[ 44], 00:31:23.563 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 46], 00:31:23.563 | 99.00th=[ 56], 99.50th=[ 60], 99.90th=[ 93], 99.95th=[ 115], 00:31:23.563 | 99.99th=[ 116] 00:31:23.563 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1446.40, stdev=93.78, samples=20 00:31:23.563 iops : min= 288, max= 384, avg=361.60, stdev=23.45, samples=20 00:31:23.563 lat (msec) : 50=98.29%, 100=1.65%, 250=0.06% 00:31:23.563 cpu : usr=97.83%, sys=1.58%, ctx=26, majf=0, minf=58 00:31:23.563 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:23.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.563 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.563 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.563 filename1: (groupid=0, jobs=1): err= 0: pid=1791082: Wed Jul 24 19:23:28 2024 00:31:23.563 read: IOPS=362, BW=1450KiB/s (1485kB/s)(14.2MiB/10018msec) 00:31:23.563 slat (usec): min=8, max=118, avg=42.76, stdev=28.46 00:31:23.563 clat (msec): min=21, max=103, avg=43.73, stdev= 4.33 00:31:23.563 lat (msec): min=21, max=103, avg=43.78, stdev= 4.33 00:31:23.563 clat percentiles (msec): 00:31:23.563 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:31:23.563 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 44], 60.00th=[ 44], 00:31:23.563 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 46], 95.00th=[ 46], 00:31:23.563 | 99.00th=[ 52], 99.50th=[ 58], 99.90th=[ 104], 99.95th=[ 104], 00:31:23.563 | 99.99th=[ 104] 00:31:23.563 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1446.40, stdev=93.78, samples=20 00:31:23.563 iops : min= 288, max= 384, avg=361.60, stdev=23.45, samples=20 00:31:23.563 lat (msec) : 50=98.18%, 100=1.38%, 250=0.44% 00:31:23.563 cpu : usr=97.70%, sys=1.63%, ctx=55, majf=0, minf=65 00:31:23.563 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 
00:31:23.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.563 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.563 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.563 filename1: (groupid=0, jobs=1): err= 0: pid=1791083: Wed Jul 24 19:23:28 2024 00:31:23.563 read: IOPS=365, BW=1461KiB/s (1496kB/s)(14.3MiB/10033msec) 00:31:23.563 slat (usec): min=7, max=143, avg=34.51, stdev=27.85 00:31:23.563 clat (usec): min=19266, max=63249, avg=43510.14, stdev=2432.17 00:31:23.563 lat (usec): min=19274, max=63309, avg=43544.65, stdev=2434.64 00:31:23.563 clat percentiles (usec): 00:31:23.563 | 1.00th=[41157], 5.00th=[42730], 10.00th=[42730], 20.00th=[42730], 00:31:23.563 | 30.00th=[42730], 40.00th=[43254], 50.00th=[43254], 60.00th=[43254], 00:31:23.563 | 70.00th=[43254], 80.00th=[44303], 90.00th=[44827], 95.00th=[45351], 00:31:23.563 | 99.00th=[51119], 99.50th=[57934], 99.90th=[63177], 99.95th=[63177], 00:31:23.563 | 99.99th=[63177] 00:31:23.563 bw ( KiB/s): min= 1408, max= 1536, per=4.19%, avg=1459.20, stdev=64.34, samples=20 00:31:23.563 iops : min= 352, max= 384, avg=364.80, stdev=16.08, samples=20 00:31:23.563 lat (msec) : 20=0.38%, 50=98.31%, 100=1.31% 00:31:23.563 cpu : usr=96.55%, sys=2.43%, ctx=137, majf=0, minf=112 00:31:23.563 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:23.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.563 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.563 issued rwts: total=3664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.563 filename1: (groupid=0, jobs=1): err= 0: pid=1791084: Wed Jul 24 19:23:28 2024 00:31:23.563 read: IOPS=362, BW=1450KiB/s (1485kB/s)(14.2MiB/10019msec) 00:31:23.563 slat (usec): min=13, max=117, avg=48.94, stdev=17.95 00:31:23.563 clat (usec): min=29928, max=92707, avg=43702.40, stdev=3746.64 00:31:23.563 lat (usec): min=29945, max=92727, avg=43751.34, stdev=3745.13 00:31:23.563 clat percentiles (usec): 00:31:23.563 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42730], 20.00th=[42730], 00:31:23.563 | 30.00th=[42730], 40.00th=[42730], 50.00th=[43254], 60.00th=[43254], 00:31:23.563 | 70.00th=[43254], 80.00th=[43779], 90.00th=[44827], 95.00th=[45876], 00:31:23.563 | 99.00th=[55837], 99.50th=[59507], 99.90th=[92799], 99.95th=[92799], 00:31:23.563 | 99.99th=[92799] 00:31:23.563 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1446.40, stdev=93.78, samples=20 00:31:23.563 iops : min= 288, max= 384, avg=361.60, stdev=23.45, samples=20 00:31:23.563 lat (msec) : 50=98.13%, 100=1.87% 00:31:23.563 cpu : usr=96.02%, sys=2.53%, ctx=107, majf=0, minf=50 00:31:23.563 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:23.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.563 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.563 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.563 filename1: (groupid=0, jobs=1): err= 0: pid=1791085: Wed Jul 24 19:23:28 2024 00:31:23.563 read: IOPS=363, BW=1454KiB/s (1489kB/s)(14.2MiB/10037msec) 00:31:23.563 slat (usec): min=9, max=135, avg=39.99, stdev=13.57 00:31:23.563 clat (usec): min=29567, max=62728, avg=43683.65, 
stdev=2208.32 00:31:23.563 lat (usec): min=29582, max=62745, avg=43723.64, stdev=2206.67 00:31:23.563 clat percentiles (usec): 00:31:23.563 | 1.00th=[42206], 5.00th=[42730], 10.00th=[42730], 20.00th=[42730], 00:31:23.563 | 30.00th=[42730], 40.00th=[42730], 50.00th=[43254], 60.00th=[43254], 00:31:23.563 | 70.00th=[43254], 80.00th=[43779], 90.00th=[45351], 95.00th=[45876], 00:31:23.563 | 99.00th=[55837], 99.50th=[59507], 99.90th=[62653], 99.95th=[62653], 00:31:23.563 | 99.99th=[62653] 00:31:23.563 bw ( KiB/s): min= 1280, max= 1536, per=4.17%, avg=1452.80, stdev=75.15, samples=20 00:31:23.563 iops : min= 320, max= 384, avg=363.20, stdev=18.79, samples=20 00:31:23.563 lat (msec) : 50=98.19%, 100=1.81% 00:31:23.563 cpu : usr=97.20%, sys=1.78%, ctx=78, majf=0, minf=51 00:31:23.563 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:23.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.563 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.563 issued rwts: total=3648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.563 filename1: (groupid=0, jobs=1): err= 0: pid=1791086: Wed Jul 24 19:23:28 2024 00:31:23.563 read: IOPS=362, BW=1450KiB/s (1485kB/s)(14.2MiB/10018msec) 00:31:23.563 slat (nsec): min=11654, max=81883, avg=28640.39, stdev=10219.49 00:31:23.563 clat (msec): min=20, max=130, avg=43.87, stdev= 6.09 00:31:23.563 lat (msec): min=20, max=130, avg=43.90, stdev= 6.09 00:31:23.563 clat percentiles (msec): 00:31:23.563 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:31:23.563 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:31:23.563 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 46], 95.00th=[ 46], 00:31:23.563 | 99.00th=[ 52], 99.50th=[ 58], 99.90th=[ 131], 99.95th=[ 131], 00:31:23.563 | 99.99th=[ 131] 00:31:23.563 bw ( KiB/s): min= 1152, max= 1536, per=4.14%, avg=1441.68, stdev=93.89, samples=19 00:31:23.563 iops : min= 288, max= 384, avg=360.42, stdev=23.47, samples=19 00:31:23.563 lat (msec) : 50=98.68%, 100=0.88%, 250=0.44% 00:31:23.563 cpu : usr=95.46%, sys=2.73%, ctx=399, majf=0, minf=59 00:31:23.563 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:23.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.563 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.563 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.563 filename1: (groupid=0, jobs=1): err= 0: pid=1791087: Wed Jul 24 19:23:28 2024 00:31:23.563 read: IOPS=362, BW=1451KiB/s (1485kB/s)(14.2MiB/10015msec) 00:31:23.563 slat (usec): min=11, max=138, avg=53.98, stdev=29.61 00:31:23.563 clat (msec): min=20, max=127, avg=43.62, stdev= 5.96 00:31:23.563 lat (msec): min=20, max=127, avg=43.68, stdev= 5.96 00:31:23.563 clat percentiles (msec): 00:31:23.563 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:31:23.563 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 44], 60.00th=[ 44], 00:31:23.563 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 46], 00:31:23.563 | 99.00th=[ 52], 99.50th=[ 58], 99.90th=[ 128], 99.95th=[ 128], 00:31:23.563 | 99.99th=[ 128] 00:31:23.563 bw ( KiB/s): min= 1152, max= 1536, per=4.14%, avg=1441.68, stdev=93.89, samples=19 00:31:23.563 iops : min= 288, max= 384, avg=360.42, stdev=23.47, samples=19 00:31:23.564 lat (msec) : 50=98.71%, 
100=0.85%, 250=0.44% 00:31:23.564 cpu : usr=97.34%, sys=1.79%, ctx=53, majf=0, minf=53 00:31:23.564 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:23.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.564 filename1: (groupid=0, jobs=1): err= 0: pid=1791088: Wed Jul 24 19:23:28 2024 00:31:23.564 read: IOPS=363, BW=1453KiB/s (1488kB/s)(14.2MiB/10039msec) 00:31:23.564 slat (usec): min=5, max=143, avg=32.05, stdev=20.39 00:31:23.564 clat (usec): min=29612, max=70449, avg=43779.50, stdev=2537.64 00:31:23.564 lat (usec): min=29624, max=70478, avg=43811.55, stdev=2537.31 00:31:23.564 clat percentiles (usec): 00:31:23.564 | 1.00th=[42206], 5.00th=[42730], 10.00th=[42730], 20.00th=[42730], 00:31:23.564 | 30.00th=[42730], 40.00th=[43254], 50.00th=[43254], 60.00th=[43254], 00:31:23.564 | 70.00th=[43779], 80.00th=[44303], 90.00th=[45351], 95.00th=[46400], 00:31:23.564 | 99.00th=[57410], 99.50th=[59507], 99.90th=[65274], 99.95th=[70779], 00:31:23.564 | 99.99th=[70779] 00:31:23.564 bw ( KiB/s): min= 1282, max= 1536, per=4.17%, avg=1452.10, stdev=72.77, samples=20 00:31:23.564 iops : min= 320, max= 384, avg=363.00, stdev=18.26, samples=20 00:31:23.564 lat (msec) : 50=97.97%, 100=2.03% 00:31:23.564 cpu : usr=97.21%, sys=2.02%, ctx=39, majf=0, minf=48 00:31:23.564 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:31:23.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 issued rwts: total=3646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.564 filename1: (groupid=0, jobs=1): err= 0: pid=1791089: Wed Jul 24 19:23:28 2024 00:31:23.564 read: IOPS=362, BW=1450KiB/s (1485kB/s)(14.2MiB/10019msec) 00:31:23.564 slat (usec): min=14, max=131, avg=62.91, stdev=23.36 00:31:23.564 clat (msec): min=30, max=115, avg=43.57, stdev= 3.92 00:31:23.564 lat (msec): min=30, max=115, avg=43.63, stdev= 3.92 00:31:23.564 clat percentiles (msec): 00:31:23.564 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:31:23.564 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 44], 00:31:23.564 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 46], 95.00th=[ 46], 00:31:23.564 | 99.00th=[ 56], 99.50th=[ 60], 99.90th=[ 93], 99.95th=[ 115], 00:31:23.564 | 99.99th=[ 116] 00:31:23.564 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1446.40, stdev=93.78, samples=20 00:31:23.564 iops : min= 288, max= 384, avg=361.60, stdev=23.45, samples=20 00:31:23.564 lat (msec) : 50=98.24%, 100=1.71%, 250=0.06% 00:31:23.564 cpu : usr=97.47%, sys=1.74%, ctx=47, majf=0, minf=62 00:31:23.564 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:23.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.564 filename2: (groupid=0, jobs=1): err= 0: pid=1791090: Wed Jul 24 19:23:28 2024 00:31:23.564 read: IOPS=362, BW=1451KiB/s (1486kB/s)(14.2MiB/10011msec) 
00:31:23.564 slat (usec): min=8, max=107, avg=26.82, stdev=19.33 00:31:23.564 clat (usec): min=41162, max=82431, avg=43858.11, stdev=3172.17 00:31:23.564 lat (usec): min=41233, max=82451, avg=43884.93, stdev=3172.92 00:31:23.564 clat percentiles (usec): 00:31:23.564 | 1.00th=[42206], 5.00th=[42730], 10.00th=[42730], 20.00th=[42730], 00:31:23.564 | 30.00th=[42730], 40.00th=[43254], 50.00th=[43254], 60.00th=[43254], 00:31:23.564 | 70.00th=[43779], 80.00th=[44303], 90.00th=[44827], 95.00th=[45876], 00:31:23.564 | 99.00th=[57934], 99.50th=[62653], 99.90th=[82314], 99.95th=[82314], 00:31:23.564 | 99.99th=[82314] 00:31:23.564 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1446.40, stdev=93.78, samples=20 00:31:23.564 iops : min= 288, max= 384, avg=361.60, stdev=23.45, samples=20 00:31:23.564 lat (msec) : 50=98.24%, 100=1.76% 00:31:23.564 cpu : usr=96.29%, sys=2.50%, ctx=109, majf=0, minf=63 00:31:23.564 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:23.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.564 filename2: (groupid=0, jobs=1): err= 0: pid=1791091: Wed Jul 24 19:23:28 2024 00:31:23.564 read: IOPS=364, BW=1460KiB/s (1495kB/s)(14.3MiB/10041msec) 00:31:23.564 slat (usec): min=7, max=158, avg=31.90, stdev=26.81 00:31:23.564 clat (usec): min=20638, max=59722, avg=43563.91, stdev=2118.31 00:31:23.564 lat (usec): min=20684, max=59759, avg=43595.81, stdev=2116.70 00:31:23.564 clat percentiles (usec): 00:31:23.564 | 1.00th=[41681], 5.00th=[42206], 10.00th=[42730], 20.00th=[42730], 00:31:23.564 | 30.00th=[42730], 40.00th=[43254], 50.00th=[43254], 60.00th=[43254], 00:31:23.564 | 70.00th=[43779], 80.00th=[44303], 90.00th=[45351], 95.00th=[45876], 00:31:23.564 | 99.00th=[53216], 99.50th=[55837], 99.90th=[59507], 99.95th=[59507], 00:31:23.564 | 99.99th=[59507] 00:31:23.564 bw ( KiB/s): min= 1408, max= 1536, per=4.19%, avg=1459.20, stdev=64.34, samples=20 00:31:23.564 iops : min= 352, max= 384, avg=364.80, stdev=16.08, samples=20 00:31:23.564 lat (msec) : 50=98.69%, 100=1.31% 00:31:23.564 cpu : usr=97.96%, sys=1.46%, ctx=35, majf=0, minf=61 00:31:23.564 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:23.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 issued rwts: total=3664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.564 filename2: (groupid=0, jobs=1): err= 0: pid=1791092: Wed Jul 24 19:23:28 2024 00:31:23.564 read: IOPS=362, BW=1450KiB/s (1485kB/s)(14.2MiB/10017msec) 00:31:23.564 slat (nsec): min=8255, max=67333, avg=25383.40, stdev=10935.86 00:31:23.564 clat (msec): min=20, max=129, avg=43.90, stdev= 6.06 00:31:23.564 lat (msec): min=20, max=129, avg=43.93, stdev= 6.06 00:31:23.564 clat percentiles (msec): 00:31:23.564 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:31:23.564 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:31:23.564 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 46], 95.00th=[ 46], 00:31:23.564 | 99.00th=[ 52], 99.50th=[ 58], 99.90th=[ 130], 99.95th=[ 130], 00:31:23.564 | 99.99th=[ 130] 00:31:23.564 bw ( KiB/s): min= 1152, max= 
1536, per=4.14%, avg=1441.68, stdev=93.89, samples=19 00:31:23.564 iops : min= 288, max= 384, avg=360.42, stdev=23.47, samples=19 00:31:23.564 lat (msec) : 50=98.51%, 100=1.05%, 250=0.44% 00:31:23.564 cpu : usr=94.98%, sys=2.82%, ctx=145, majf=0, minf=50 00:31:23.564 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:23.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.564 filename2: (groupid=0, jobs=1): err= 0: pid=1791093: Wed Jul 24 19:23:28 2024 00:31:23.564 read: IOPS=362, BW=1450KiB/s (1485kB/s)(14.2MiB/10017msec) 00:31:23.564 slat (usec): min=8, max=121, avg=37.40, stdev=23.83 00:31:23.564 clat (msec): min=39, max=103, avg=43.79, stdev= 4.25 00:31:23.564 lat (msec): min=39, max=103, avg=43.82, stdev= 4.25 00:31:23.564 clat percentiles (msec): 00:31:23.564 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:31:23.564 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:31:23.564 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 46], 00:31:23.564 | 99.00th=[ 53], 99.50th=[ 58], 99.90th=[ 104], 99.95th=[ 104], 00:31:23.564 | 99.99th=[ 104] 00:31:23.564 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1446.40, stdev=93.78, samples=20 00:31:23.564 iops : min= 288, max= 384, avg=361.60, stdev=23.45, samples=20 00:31:23.564 lat (msec) : 50=98.24%, 100=1.32%, 250=0.44% 00:31:23.564 cpu : usr=98.11%, sys=1.20%, ctx=80, majf=0, minf=48 00:31:23.564 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:23.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.564 filename2: (groupid=0, jobs=1): err= 0: pid=1791094: Wed Jul 24 19:23:28 2024 00:31:23.564 read: IOPS=362, BW=1450KiB/s (1485kB/s)(14.2MiB/10019msec) 00:31:23.564 slat (nsec): min=9153, max=90350, avg=41163.38, stdev=11457.26 00:31:23.564 clat (usec): min=29625, max=92650, avg=43773.56, stdev=3736.55 00:31:23.564 lat (usec): min=29635, max=92683, avg=43814.73, stdev=3735.53 00:31:23.564 clat percentiles (usec): 00:31:23.564 | 1.00th=[42206], 5.00th=[42206], 10.00th=[42730], 20.00th=[42730], 00:31:23.564 | 30.00th=[42730], 40.00th=[42730], 50.00th=[43254], 60.00th=[43254], 00:31:23.564 | 70.00th=[43254], 80.00th=[43779], 90.00th=[44827], 95.00th=[45876], 00:31:23.564 | 99.00th=[55837], 99.50th=[59507], 99.90th=[92799], 99.95th=[92799], 00:31:23.564 | 99.99th=[92799] 00:31:23.564 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1446.40, stdev=93.78, samples=20 00:31:23.564 iops : min= 288, max= 384, avg=361.60, stdev=23.45, samples=20 00:31:23.564 lat (msec) : 50=98.13%, 100=1.87% 00:31:23.564 cpu : usr=95.97%, sys=2.48%, ctx=141, majf=0, minf=40 00:31:23.564 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:31:23.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.564 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.564 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:31:23.564 filename2: (groupid=0, jobs=1): err= 0: pid=1791095: Wed Jul 24 19:23:28 2024 00:31:23.564 read: IOPS=365, BW=1463KiB/s (1498kB/s)(14.3MiB/10015msec) 00:31:23.564 slat (usec): min=10, max=110, avg=36.88, stdev=17.47 00:31:23.564 clat (msec): min=14, max=134, avg=43.42, stdev= 6.99 00:31:23.564 lat (msec): min=14, max=134, avg=43.46, stdev= 7.00 00:31:23.564 clat percentiles (msec): 00:31:23.564 | 1.00th=[ 29], 5.00th=[ 40], 10.00th=[ 43], 20.00th=[ 43], 00:31:23.565 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 44], 60.00th=[ 44], 00:31:23.565 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 46], 00:31:23.565 | 99.00th=[ 56], 99.50th=[ 68], 99.90th=[ 136], 99.95th=[ 136], 00:31:23.565 | 99.99th=[ 136] 00:31:23.565 bw ( KiB/s): min= 1024, max= 1680, per=4.19%, avg=1458.53, stdev=131.54, samples=19 00:31:23.565 iops : min= 256, max= 420, avg=364.63, stdev=32.88, samples=19 00:31:23.565 lat (msec) : 20=0.11%, 50=97.71%, 100=1.75%, 250=0.44% 00:31:23.565 cpu : usr=97.96%, sys=1.55%, ctx=22, majf=0, minf=54 00:31:23.565 IO depths : 1=4.0%, 2=8.5%, 4=18.7%, 8=59.0%, 16=9.8%, 32=0.0%, >=64=0.0% 00:31:23.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.565 complete : 0=0.0%, 4=92.7%, 8=2.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.565 issued rwts: total=3662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.565 filename2: (groupid=0, jobs=1): err= 0: pid=1791096: Wed Jul 24 19:23:28 2024 00:31:23.565 read: IOPS=362, BW=1448KiB/s (1483kB/s)(14.2MiB/10030msec) 00:31:23.565 slat (usec): min=8, max=108, avg=29.42, stdev=20.39 00:31:23.565 clat (msec): min=39, max=115, avg=43.92, stdev= 5.01 00:31:23.565 lat (msec): min=39, max=115, avg=43.95, stdev= 5.01 00:31:23.565 clat percentiles (msec): 00:31:23.565 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:31:23.565 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 44], 00:31:23.565 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 46], 00:31:23.565 | 99.00th=[ 53], 99.50th=[ 58], 99.90th=[ 116], 99.95th=[ 116], 00:31:23.565 | 99.99th=[ 116] 00:31:23.565 bw ( KiB/s): min= 1152, max= 1536, per=4.16%, avg=1446.40, stdev=93.78, samples=20 00:31:23.565 iops : min= 288, max= 384, avg=361.60, stdev=23.45, samples=20 00:31:23.565 lat (msec) : 50=98.24%, 100=1.32%, 250=0.44% 00:31:23.565 cpu : usr=97.42%, sys=1.74%, ctx=69, majf=0, minf=116 00:31:23.565 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:23.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.565 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.565 issued rwts: total=3632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.565 filename2: (groupid=0, jobs=1): err= 0: pid=1791097: Wed Jul 24 19:23:28 2024 00:31:23.565 read: IOPS=361, BW=1446KiB/s (1481kB/s)(14.1MiB/10001msec) 00:31:23.565 slat (usec): min=12, max=131, avg=48.96, stdev=20.82 00:31:23.565 clat (msec): min=29, max=134, avg=43.80, stdev= 6.31 00:31:23.565 lat (msec): min=29, max=134, avg=43.85, stdev= 6.31 00:31:23.565 clat percentiles (msec): 00:31:23.565 | 1.00th=[ 43], 5.00th=[ 43], 10.00th=[ 43], 20.00th=[ 43], 00:31:23.565 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 44], 60.00th=[ 44], 00:31:23.565 | 70.00th=[ 44], 80.00th=[ 44], 90.00th=[ 45], 95.00th=[ 46], 00:31:23.565 | 99.00th=[ 54], 
99.50th=[ 56], 99.90th=[ 136], 99.95th=[ 136], 00:31:23.565 | 99.99th=[ 136] 00:31:23.565 bw ( KiB/s): min= 1024, max= 1552, per=4.14%, avg=1441.68, stdev=120.55, samples=19 00:31:23.565 iops : min= 256, max= 388, avg=360.42, stdev=30.14, samples=19 00:31:23.565 lat (msec) : 50=98.56%, 100=1.00%, 250=0.44% 00:31:23.565 cpu : usr=97.79%, sys=1.63%, ctx=46, majf=0, minf=58 00:31:23.565 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:23.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.565 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:23.565 issued rwts: total=3616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:23.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:23.565 00:31:23.565 Run status group 0 (all jobs): 00:31:23.565 READ: bw=34.0MiB/s (35.6MB/s), 1446KiB/s-1463KiB/s (1481kB/s-1498kB/s), io=341MiB (358MB), run=10001-10041msec 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@46 -- # destroy_subsystem 2 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.565 bdev_null0 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:23.565 19:23:28 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.565 [2024-07-24 19:23:28.447387] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.565 bdev_null1 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.565 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:23.566 { 00:31:23.566 "params": { 00:31:23.566 "name": "Nvme$subsystem", 00:31:23.566 "trtype": "$TEST_TRANSPORT", 00:31:23.566 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:31:23.566 "adrfam": "ipv4", 00:31:23.566 "trsvcid": "$NVMF_PORT", 00:31:23.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.566 "hdgst": ${hdgst:-false}, 00:31:23.566 "ddgst": ${ddgst:-false} 00:31:23.566 }, 00:31:23.566 "method": "bdev_nvme_attach_controller" 00:31:23.566 } 00:31:23.566 EOF 00:31:23.566 )") 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:23.566 { 00:31:23.566 "params": { 00:31:23.566 "name": "Nvme$subsystem", 00:31:23.566 "trtype": "$TEST_TRANSPORT", 00:31:23.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.566 "adrfam": "ipv4", 00:31:23.566 "trsvcid": "$NVMF_PORT", 00:31:23.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.566 "hdgst": ${hdgst:-false}, 00:31:23.566 "ddgst": ${ddgst:-false} 00:31:23.566 }, 00:31:23.566 "method": "bdev_nvme_attach_controller" 00:31:23.566 } 00:31:23.566 EOF 00:31:23.566 )") 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:23.566 19:23:28 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:23.566 "params": { 00:31:23.566 "name": "Nvme0", 00:31:23.566 "trtype": "tcp", 00:31:23.566 "traddr": "10.0.0.2", 00:31:23.566 "adrfam": "ipv4", 00:31:23.566 "trsvcid": "4420", 00:31:23.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:23.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:23.566 "hdgst": false, 00:31:23.566 "ddgst": false 00:31:23.566 }, 00:31:23.566 "method": "bdev_nvme_attach_controller" 00:31:23.566 },{ 00:31:23.566 "params": { 00:31:23.566 "name": "Nvme1", 00:31:23.566 "trtype": "tcp", 00:31:23.566 "traddr": "10.0.0.2", 00:31:23.566 "adrfam": "ipv4", 00:31:23.566 "trsvcid": "4420", 00:31:23.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.566 "hdgst": false, 00:31:23.566 "ddgst": false 00:31:23.566 }, 00:31:23.566 "method": "bdev_nvme_attach_controller" 00:31:23.566 }' 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:23.566 19:23:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:23.566 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:23.566 ... 00:31:23.566 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:23.566 ... 
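The job file fio reads on /dev/fd/61 is produced by gen_fio_conf and is never echoed to this log; only its effect is visible in the filename0/filename1 banners above. A minimal sketch of a job file that would reproduce the parameters visible in this run (rw=randread, bs=8k,16k,128k, iodepth=8, numjobs=2, runtime=5, one job per subsystem) follows — the thread/direct/time_based flags and the Nvme0n1/Nvme1n1 bdev names are assumptions, not values taken from the trace:

cat > /tmp/dif_rand_params_sketch.fio <<'FIO'
[global]
thread=1          # assumed: fio's spdk_bdev ioengine is normally run with thread=1
direct=1          # assumed
time_based=1      # assumed; runtime=5 matches the dif.sh@115 assignment traced earlier
runtime=5
iodepth=8
rw=randread
bs=8k,16k,128k    # read/write/trim block sizes, matching the (R)/(W)/(T) banner above
numjobs=2         # 2 jobs x 2 files = the "Starting 4 threads" line that follows

[filename0]
filename=Nvme0n1  # assumed name of the bdev attached from cnode0

[filename1]
filename=Nvme1n1  # assumed name of the bdev attached from cnode1
FIO

# Same invocation the trace shows, with on-disk stand-ins for /dev/fd/62 and /dev/fd/61:
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json /tmp/dif_rand_params_sketch.fio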
00:31:23.566 fio-3.35 00:31:23.566 Starting 4 threads 00:31:23.566 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.129 00:31:30.129 filename0: (groupid=0, jobs=1): err= 0: pid=1792481: Wed Jul 24 19:23:34 2024 00:31:30.129 read: IOPS=1219, BW=9754KiB/s (9988kB/s)(47.7MiB/5008msec) 00:31:30.129 slat (usec): min=4, max=100, avg=27.04, stdev=13.12 00:31:30.129 clat (usec): min=1402, max=15900, avg=6467.06, stdev=1493.47 00:31:30.129 lat (usec): min=1424, max=15924, avg=6494.10, stdev=1497.02 00:31:30.129 clat percentiles (usec): 00:31:30.129 | 1.00th=[ 4359], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5538], 00:31:30.129 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5735], 60.00th=[ 5800], 00:31:30.129 | 70.00th=[ 5997], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 9110], 00:31:30.129 | 99.00th=[ 9765], 99.50th=[10421], 99.90th=[13173], 99.95th=[14222], 00:31:30.129 | 99.99th=[15926] 00:31:30.129 bw ( KiB/s): min= 7168, max=11408, per=25.12%, avg=9756.80, stdev=1755.94, samples=10 00:31:30.129 iops : min= 896, max= 1426, avg=1219.60, stdev=219.49, samples=10 00:31:30.129 lat (msec) : 2=0.08%, 4=0.57%, 10=98.66%, 20=0.69% 00:31:30.129 cpu : usr=93.99%, sys=5.19%, ctx=16, majf=0, minf=24 00:31:30.129 IO depths : 1=0.4%, 2=14.2%, 4=59.1%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.129 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.129 issued rwts: total=6106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.129 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:30.129 filename0: (groupid=0, jobs=1): err= 0: pid=1792482: Wed Jul 24 19:23:34 2024 00:31:30.129 read: IOPS=1210, BW=9680KiB/s (9913kB/s)(47.3MiB/5004msec) 00:31:30.129 slat (usec): min=5, max=101, avg=28.79, stdev=13.57 00:31:30.129 clat (usec): min=997, max=28117, avg=6497.33, stdev=1752.81 00:31:30.129 lat (usec): min=1016, max=28134, avg=6526.11, stdev=1753.51 00:31:30.129 clat percentiles (usec): 00:31:30.129 | 1.00th=[ 2671], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5538], 00:31:30.129 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5800], 00:31:30.129 | 70.00th=[ 6194], 80.00th=[ 8717], 90.00th=[ 8848], 95.00th=[ 8979], 00:31:30.129 | 99.00th=[10421], 99.50th=[14484], 99.90th=[20317], 99.95th=[20317], 00:31:30.129 | 99.99th=[28181] 00:31:30.129 bw ( KiB/s): min= 7152, max=11312, per=24.93%, avg=9680.00, stdev=1758.69, samples=10 00:31:30.129 iops : min= 894, max= 1414, avg=1210.00, stdev=219.84, samples=10 00:31:30.129 lat (usec) : 1000=0.02% 00:31:30.129 lat (msec) : 2=0.55%, 4=1.26%, 10=96.94%, 20=1.11%, 50=0.13% 00:31:30.129 cpu : usr=92.30%, sys=5.56%, ctx=166, majf=0, minf=45 00:31:30.129 IO depths : 1=0.4%, 2=21.5%, 4=52.6%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.129 complete : 0=0.0%, 4=90.8%, 8=9.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.129 issued rwts: total=6055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.129 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:30.129 filename1: (groupid=0, jobs=1): err= 0: pid=1792483: Wed Jul 24 19:23:34 2024 00:31:30.129 read: IOPS=1214, BW=9715KiB/s (9948kB/s)(47.5MiB/5006msec) 00:31:30.129 slat (usec): min=5, max=101, avg=26.12, stdev=13.43 00:31:30.129 clat (usec): min=1256, max=16062, avg=6484.97, stdev=1532.12 00:31:30.129 lat (usec): min=1285, max=16071, avg=6511.09, stdev=1531.87 00:31:30.129 clat percentiles 
(usec): 00:31:30.129 | 1.00th=[ 4228], 5.00th=[ 5342], 10.00th=[ 5407], 20.00th=[ 5538], 00:31:30.129 | 30.00th=[ 5604], 40.00th=[ 5669], 50.00th=[ 5669], 60.00th=[ 5800], 00:31:30.129 | 70.00th=[ 6063], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 8979], 00:31:30.129 | 99.00th=[ 9896], 99.50th=[10552], 99.90th=[13698], 99.95th=[14746], 00:31:30.129 | 99.99th=[16057] 00:31:30.129 bw ( KiB/s): min= 7168, max=11392, per=25.03%, avg=9718.60, stdev=1761.82, samples=10 00:31:30.129 iops : min= 896, max= 1424, avg=1214.80, stdev=220.23, samples=10 00:31:30.129 lat (msec) : 2=0.13%, 4=0.77%, 10=98.37%, 20=0.72% 00:31:30.129 cpu : usr=92.79%, sys=5.05%, ctx=132, majf=0, minf=34 00:31:30.129 IO depths : 1=0.7%, 2=21.6%, 4=52.5%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.129 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.129 issued rwts: total=6079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.129 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:30.129 filename1: (groupid=0, jobs=1): err= 0: pid=1792484: Wed Jul 24 19:23:34 2024 00:31:30.129 read: IOPS=1213, BW=9705KiB/s (9937kB/s)(47.4MiB/5003msec) 00:31:30.129 slat (usec): min=4, max=112, avg=28.44, stdev=13.11 00:31:30.129 clat (usec): min=1173, max=20998, avg=6483.40, stdev=1634.97 00:31:30.129 lat (usec): min=1206, max=21012, avg=6511.84, stdev=1635.10 00:31:30.129 clat percentiles (usec): 00:31:30.129 | 1.00th=[ 3523], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5473], 00:31:30.129 | 30.00th=[ 5538], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5800], 00:31:30.129 | 70.00th=[ 6063], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 8979], 00:31:30.129 | 99.00th=[ 9634], 99.50th=[13304], 99.90th=[19006], 99.95th=[19006], 00:31:30.129 | 99.99th=[21103] 00:31:30.129 bw ( KiB/s): min= 7280, max=11264, per=24.98%, avg=9699.40, stdev=1693.69, samples=10 00:31:30.129 iops : min= 910, max= 1408, avg=1212.40, stdev=211.72, samples=10 00:31:30.129 lat (msec) : 2=0.13%, 4=1.05%, 10=97.99%, 20=0.81%, 50=0.02% 00:31:30.129 cpu : usr=94.56%, sys=4.30%, ctx=90, majf=0, minf=70 00:31:30.129 IO depths : 1=0.8%, 2=22.0%, 4=52.1%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.129 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.129 issued rwts: total=6069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.129 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:30.129 00:31:30.129 Run status group 0 (all jobs): 00:31:30.129 READ: bw=37.9MiB/s (39.8MB/s), 9680KiB/s-9754KiB/s (9913kB/s-9988kB/s), io=190MiB (199MB), run=5003-5008msec 00:31:30.129 19:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:30.129 19:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
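The destroy_subsystems teardown traced here (and continuing below) is a plain per-subsystem loop: drop the NVMe-oF subsystem, then delete the null bdev behind it. Condensed into a standalone sketch it amounts to the following, where the rpc.py path is an assumption — the harness reaches it through its rpc_cmd wrapper rather than calling it directly:

#!/usr/bin/env bash
# Condensed sketch of destroy_subsystems 0 1 as traced in this log:
# delete each NVMe-oF subsystem first, then the null bdev backing it.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed helper path
for sub in 0 1; do
    "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub}"
    "$rpc" bdev_null_delete "bdev_null${sub}"
done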
00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.130 00:31:30.130 real 0m25.235s 00:31:30.130 user 4m31.644s 00:31:30.130 sys 0m7.947s 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:30.130 19:23:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 ************************************ 00:31:30.130 END TEST fio_dif_rand_params 00:31:30.130 ************************************ 00:31:30.130 19:23:35 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:30.130 19:23:35 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:30.130 19:23:35 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:30.130 19:23:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 ************************************ 00:31:30.130 START TEST fio_dif_digest 00:31:30.130 ************************************ 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:30.130 19:23:35 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 bdev_null0 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:30.130 [2024-07-24 19:23:35.197512] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:30.130 { 00:31:30.130 "params": { 00:31:30.130 "name": "Nvme$subsystem", 00:31:30.130 "trtype": "$TEST_TRANSPORT", 00:31:30.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.130 "adrfam": "ipv4", 00:31:30.130 "trsvcid": "$NVMF_PORT", 00:31:30.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.130 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:31:30.130 "hdgst": ${hdgst:-false}, 00:31:30.130 "ddgst": ${ddgst:-false} 00:31:30.130 }, 00:31:30.130 "method": "bdev_nvme_attach_controller" 00:31:30.130 } 00:31:30.130 EOF 00:31:30.130 )") 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:30.130 "params": { 00:31:30.130 "name": "Nvme0", 00:31:30.130 "trtype": "tcp", 00:31:30.130 "traddr": "10.0.0.2", 00:31:30.130 "adrfam": "ipv4", 00:31:30.130 "trsvcid": "4420", 00:31:30.130 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:30.130 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:30.130 "hdgst": true, 00:31:30.130 "ddgst": true 00:31:30.130 }, 00:31:30.130 "method": "bdev_nvme_attach_controller" 00:31:30.130 }' 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:30.130 19:23:35 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.130 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:30.130 ... 
00:31:30.130 fio-3.35 00:31:30.130 Starting 3 threads 00:31:30.130 EAL: No free 2048 kB hugepages reported on node 1 00:31:42.355 00:31:42.355 filename0: (groupid=0, jobs=1): err= 0: pid=1793233: Wed Jul 24 19:23:46 2024 00:31:42.355 read: IOPS=153, BW=19.1MiB/s (20.1MB/s)(192MiB/10049msec) 00:31:42.355 slat (nsec): min=5789, max=35931, avg=18489.88, stdev=1892.57 00:31:42.355 clat (usec): min=11289, max=53628, avg=19537.75, stdev=2451.22 00:31:42.355 lat (usec): min=11307, max=53647, avg=19556.24, stdev=2451.29 00:31:42.355 clat percentiles (usec): 00:31:42.355 | 1.00th=[13042], 5.00th=[13960], 10.00th=[16581], 20.00th=[18744], 00:31:42.355 | 30.00th=[19268], 40.00th=[19530], 50.00th=[19792], 60.00th=[20055], 00:31:42.355 | 70.00th=[20579], 80.00th=[21103], 90.00th=[21627], 95.00th=[22152], 00:31:42.355 | 99.00th=[23462], 99.50th=[23725], 99.90th=[49021], 99.95th=[53740], 00:31:42.355 | 99.99th=[53740] 00:31:42.355 bw ( KiB/s): min=18432, max=22528, per=33.34%, avg=19660.80, stdev=861.56, samples=20 00:31:42.355 iops : min= 144, max= 176, avg=153.60, stdev= 6.73, samples=20 00:31:42.355 lat (msec) : 20=56.21%, 50=43.73%, 100=0.06% 00:31:42.355 cpu : usr=92.17%, sys=7.28%, ctx=26, majf=0, minf=136 00:31:42.355 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.355 issued rwts: total=1539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.355 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:42.355 filename0: (groupid=0, jobs=1): err= 0: pid=1793234: Wed Jul 24 19:23:46 2024 00:31:42.355 read: IOPS=152, BW=19.1MiB/s (20.0MB/s)(192MiB/10048msec) 00:31:42.355 slat (nsec): min=5735, max=48896, avg=19461.40, stdev=2349.54 00:31:42.355 clat (usec): min=11013, max=63228, avg=19595.71, stdev=6511.40 00:31:42.355 lat (usec): min=11032, max=63248, avg=19615.17, stdev=6511.42 00:31:42.355 clat percentiles (usec): 00:31:42.355 | 1.00th=[12649], 5.00th=[15926], 10.00th=[16909], 20.00th=[17695], 00:31:42.355 | 30.00th=[18220], 40.00th=[18482], 50.00th=[18744], 60.00th=[19006], 00:31:42.355 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20579], 95.00th=[21365], 00:31:42.355 | 99.00th=[59507], 99.50th=[60556], 99.90th=[63177], 99.95th=[63177], 00:31:42.355 | 99.99th=[63177] 00:31:42.355 bw ( KiB/s): min=15104, max=21760, per=33.25%, avg=19609.60, stdev=1463.32, samples=20 00:31:42.355 iops : min= 118, max= 170, avg=153.20, stdev=11.43, samples=20 00:31:42.355 lat (msec) : 20=83.44%, 50=14.08%, 100=2.48% 00:31:42.355 cpu : usr=92.11%, sys=7.31%, ctx=15, majf=0, minf=103 00:31:42.355 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.355 issued rwts: total=1534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.355 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:42.355 filename0: (groupid=0, jobs=1): err= 0: pid=1793235: Wed Jul 24 19:23:46 2024 00:31:42.355 read: IOPS=154, BW=19.4MiB/s (20.3MB/s)(195MiB/10048msec) 00:31:42.355 slat (usec): min=5, max=113, avg=19.58, stdev= 3.24 00:31:42.355 clat (usec): min=10773, max=61085, avg=19305.48, stdev=4644.57 00:31:42.355 lat (usec): min=10791, max=61104, avg=19325.06, stdev=4644.56 00:31:42.355 clat percentiles (usec): 00:31:42.355 | 
1.00th=[12125], 5.00th=[13698], 10.00th=[16909], 20.00th=[17957], 00:31:42.355 | 30.00th=[18482], 40.00th=[18744], 50.00th=[19268], 60.00th=[19530], 00:31:42.355 | 70.00th=[19792], 80.00th=[20317], 90.00th=[21103], 95.00th=[21890], 00:31:42.355 | 99.00th=[56361], 99.50th=[60031], 99.90th=[61080], 99.95th=[61080], 00:31:42.355 | 99.99th=[61080] 00:31:42.355 bw ( KiB/s): min=17920, max=21504, per=33.75%, avg=19905.85, stdev=959.10, samples=20 00:31:42.355 iops : min= 140, max= 168, avg=155.50, stdev= 7.51, samples=20 00:31:42.355 lat (msec) : 20=73.80%, 50=25.11%, 100=1.09% 00:31:42.355 cpu : usr=92.57%, sys=6.84%, ctx=21, majf=0, minf=158 00:31:42.355 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:42.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:42.355 issued rwts: total=1557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:42.355 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:42.355 00:31:42.355 Run status group 0 (all jobs): 00:31:42.355 READ: bw=57.6MiB/s (60.4MB/s), 19.1MiB/s-19.4MiB/s (20.0MB/s-20.3MB/s), io=579MiB (607MB), run=10048-10049msec 00:31:42.355 19:23:46 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:42.355 19:23:46 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:42.355 19:23:46 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.355 19:23:46 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:42.355 19:23:46 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:42.355 19:23:46 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.355 19:23:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.355 19:23:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:42.355 19:23:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.355 19:23:46 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:42.355 19:23:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:42.355 19:23:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:42.355 19:23:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:42.355 00:31:42.355 real 0m11.436s 00:31:42.355 user 0m29.143s 00:31:42.355 sys 0m2.559s 00:31:42.355 19:23:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:42.355 19:23:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:42.355 ************************************ 00:31:42.355 END TEST fio_dif_digest 00:31:42.355 ************************************ 00:31:42.355 19:23:46 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:42.355 19:23:46 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:42.355 19:23:46 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:42.355 19:23:46 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:42.355 19:23:46 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:42.355 19:23:46 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:42.355 19:23:46 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:42.355 19:23:46 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:42.355 rmmod nvme_tcp 00:31:42.355 rmmod nvme_fabrics 00:31:42.355 
rmmod nvme_keyring 00:31:42.355 19:23:46 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:42.355 19:23:46 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:42.355 19:23:46 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:42.355 19:23:46 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1787170 ']' 00:31:42.355 19:23:46 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1787170 00:31:42.355 19:23:46 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1787170 ']' 00:31:42.355 19:23:46 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1787170 00:31:42.355 19:23:46 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:31:42.355 19:23:46 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:42.355 19:23:46 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1787170 00:31:42.355 19:23:46 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:42.355 19:23:46 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:42.355 19:23:46 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1787170' 00:31:42.355 killing process with pid 1787170 00:31:42.355 19:23:46 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1787170 00:31:42.355 19:23:46 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1787170 00:31:42.355 19:23:47 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:42.355 19:23:47 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:43.290 Waiting for block devices as requested 00:31:43.290 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:31:43.290 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:43.549 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:43.549 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:43.808 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:43.808 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:43.808 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:43.808 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:44.067 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:44.067 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:44.067 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:44.325 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:44.325 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:44.325 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:44.325 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:44.583 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:44.583 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:44.583 19:23:50 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:44.583 19:23:50 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:44.583 19:23:50 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:44.584 19:23:50 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:44.584 19:23:50 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.584 19:23:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:44.584 19:23:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.115 19:23:52 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:47.115 00:31:47.115 real 1m11.945s 00:31:47.115 user 6m31.777s 00:31:47.115 sys 0m22.440s 00:31:47.115 19:23:52 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:47.115 19:23:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
00:31:47.115 ************************************ 00:31:47.115 END TEST nvmf_dif 00:31:47.115 ************************************ 00:31:47.115 19:23:52 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:47.115 19:23:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:47.115 19:23:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:47.115 19:23:52 -- common/autotest_common.sh@10 -- # set +x 00:31:47.115 ************************************ 00:31:47.115 START TEST nvmf_abort_qd_sizes 00:31:47.115 ************************************ 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:47.115 * Looking for test storage... 00:31:47.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.115 19:23:52 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.116 19:23:52 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:47.116 19:23:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:31:49.646 Found 0000:84:00.0 (0x8086 - 0x159b) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:31:49.646 Found 0000:84:00.1 (0x8086 - 0x159b) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:31:49.646 Found net devices under 0000:84:00.0: cvl_0_0 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:49.646 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:31:49.647 Found net devices under 0000:84:00.1: cvl_0_1 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
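With both E810 ports mapped to cvl_0_0 and cvl_0_1, the nvmf_tcp_init steps traced below move the target-side port into a private network namespace, so a single host can act as both target (inside cvl_0_0_ns_spdk, 10.0.0.2) and initiator (outside, 10.0.0.1) over the physical NIC pair. Condensed by hand from the commands that follow:

    # Sketch of the traced nvmf_tcp_init sequence; interface and namespace
    # names are the ones the harness reports above.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT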
00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:49.647 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:49.905 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:49.905 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:49.905 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:49.905 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:49.905 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:49.905 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:49.905 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:49.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:49.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:31:49.905 00:31:49.905 --- 10.0.0.2 ping statistics --- 00:31:49.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.905 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:31:49.905 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:49.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:49.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:31:49.905 00:31:49.905 --- 10.0.0.1 ping statistics --- 00:31:49.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.905 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:31:49.905 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:49.905 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:49.905 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:49.905 19:23:55 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:51.807 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:51.807 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:51.807 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:51.807 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:51.807 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:51.807 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:51.807 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:51.807 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:51.807 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:51.807 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:51.807 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:51.807 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:51.807 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:51.807 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:51.807 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:51.807 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:52.742 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1798311 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1798311 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1798311 ']' 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:52.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:52.742 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:52.742 [2024-07-24 19:23:58.407394] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:31:52.742 [2024-07-24 19:23:58.407594] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:53.001 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.001 [2024-07-24 19:23:58.554765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:53.259 [2024-07-24 19:23:58.755390] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:53.259 [2024-07-24 19:23:58.755511] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:53.259 [2024-07-24 19:23:58.755531] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:53.259 [2024-07-24 19:23:58.755546] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:53.259 [2024-07-24 19:23:58.755559] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:53.259 [2024-07-24 19:23:58.755624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.259 [2024-07-24 19:23:58.755684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:53.259 [2024-07-24 19:23:58.755712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:53.259 [2024-07-24 19:23:58.755720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:31:53.259 19:23:58 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:53.259 19:23:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:53.259 ************************************ 00:31:53.259 START TEST spdk_target_abort 00:31:53.259 ************************************ 00:31:53.517 19:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:31:53.517 19:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:53.517 19:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:31:53.517 19:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:53.517 19:23:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:56.796 spdk_targetn1 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:56.796 [2024-07-24 19:24:01.805846] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:56.796 [2024-07-24 19:24:01.840350] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:56.796 19:24:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:56.796 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:00.136 Initializing NVMe Controllers 00:32:00.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:00.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:00.136 Initialization complete. Launching workers. 00:32:00.136 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8379, failed: 0 00:32:00.136 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1274, failed to submit 7105 00:32:00.136 success 751, unsuccess 523, failed 0 00:32:00.136 19:24:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:00.136 19:24:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:00.136 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.443 Initializing NVMe Controllers 00:32:03.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:03.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:03.443 Initialization complete. Launching workers. 00:32:03.443 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8478, failed: 0 00:32:03.443 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1275, failed to submit 7203 00:32:03.443 success 294, unsuccess 981, failed 0 00:32:03.443 19:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:03.443 19:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:03.443 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.973 Initializing NVMe Controllers 00:32:05.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:05.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:05.973 Initialization complete. Launching workers. 
00:32:05.973 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27545, failed: 0 00:32:05.973 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2711, failed to submit 24834 00:32:05.973 success 249, unsuccess 2462, failed 0 00:32:05.973 19:24:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:05.973 19:24:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.973 19:24:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.231 19:24:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:06.231 19:24:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:06.231 19:24:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:06.231 19:24:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:07.605 19:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:07.605 19:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1798311 00:32:07.605 19:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1798311 ']' 00:32:07.605 19:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1798311 00:32:07.605 19:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:32:07.605 19:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:07.605 19:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1798311 00:32:07.605 19:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:07.605 19:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:07.605 19:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1798311' 00:32:07.605 killing process with pid 1798311 00:32:07.605 19:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1798311 00:32:07.605 19:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1798311 00:32:07.863 00:32:07.863 real 0m14.504s 00:32:07.863 user 0m54.736s 00:32:07.863 sys 0m2.840s 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:07.863 ************************************ 00:32:07.863 END TEST spdk_target_abort 00:32:07.863 ************************************ 00:32:07.863 19:24:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:07.863 19:24:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:07.863 19:24:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:07.863 19:24:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:07.863 ************************************ 00:32:07.863 START TEST kernel_target_abort 00:32:07.863 
************************************ 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:07.863 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:07.864 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:07.864 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:07.864 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:07.864 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:07.864 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:07.864 19:24:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:09.767 Waiting for block devices as requested 00:32:09.767 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:32:09.767 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:09.767 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:10.027 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:10.027 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:10.027 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:10.286 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:10.286 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:10.286 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:10.286 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:10.545 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:10.545 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:10.545 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:10.545 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:10.805 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:10.805 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:10.805 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:11.065 No valid GPT data, bailing 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:11.065 19:24:16 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:11.065 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:32:11.324 00:32:11.324 Discovery Log Number of Records 2, Generation counter 2 00:32:11.324 =====Discovery Log Entry 0====== 00:32:11.324 trtype: tcp 00:32:11.324 adrfam: ipv4 00:32:11.324 subtype: current discovery subsystem 00:32:11.324 treq: not specified, sq flow control disable supported 00:32:11.324 portid: 1 00:32:11.324 trsvcid: 4420 00:32:11.324 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:11.324 traddr: 10.0.0.1 00:32:11.324 eflags: none 00:32:11.324 sectype: none 00:32:11.324 =====Discovery Log Entry 1====== 00:32:11.324 trtype: tcp 00:32:11.324 adrfam: ipv4 00:32:11.324 subtype: nvme subsystem 00:32:11.324 treq: not specified, sq flow control disable supported 00:32:11.324 portid: 1 00:32:11.324 trsvcid: 4420 00:32:11.324 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:11.324 traddr: 10.0.0.1 00:32:11.324 eflags: none 00:32:11.324 sectype: none 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:11.324 19:24:16 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:11.324 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:11.325 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:11.325 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:11.325 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:11.325 19:24:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:11.325 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.611 Initializing NVMe Controllers 00:32:14.611 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:14.611 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:14.611 Initialization complete. Launching workers. 00:32:14.611 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 19237, failed: 0 00:32:14.611 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19237, failed to submit 0 00:32:14.611 success 0, unsuccess 19237, failed 0 00:32:14.611 19:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:14.611 19:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:14.611 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.898 Initializing NVMe Controllers 00:32:17.898 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:17.898 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:17.898 Initialization complete. Launching workers. 
00:32:17.898 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35495, failed: 0 00:32:17.898 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 8954, failed to submit 26541 00:32:17.898 success 0, unsuccess 8954, failed 0 00:32:17.898 19:24:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:17.898 19:24:23 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:17.898 EAL: No free 2048 kB hugepages reported on node 1 00:32:21.182 Initializing NVMe Controllers 00:32:21.182 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:21.182 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:21.182 Initialization complete. Launching workers. 00:32:21.182 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33958, failed: 0 00:32:21.182 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 8486, failed to submit 25472 00:32:21.182 success 0, unsuccess 8486, failed 0 00:32:21.182 19:24:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:21.182 19:24:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:21.182 19:24:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:21.182 19:24:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:21.182 19:24:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:21.182 19:24:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:21.182 19:24:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:21.182 19:24:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:21.182 19:24:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:21.182 19:24:26 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:22.592 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:22.592 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:22.592 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:22.592 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:22.592 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:22.592 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:22.592 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:22.592 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:22.592 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:22.592 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:22.592 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:22.592 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:22.592 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:22.592 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:32:22.592 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:22.592 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:23.529 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:32:23.529 00:32:23.529 real 0m15.699s 00:32:23.529 user 0m6.273s 00:32:23.529 sys 0m4.243s 00:32:23.529 19:24:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:23.529 19:24:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:23.529 ************************************ 00:32:23.529 END TEST kernel_target_abort 00:32:23.529 ************************************ 00:32:23.787 19:24:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:23.787 19:24:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:23.787 19:24:29 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:23.787 19:24:29 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:23.787 19:24:29 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:23.787 19:24:29 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:23.787 19:24:29 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:23.787 19:24:29 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:23.787 rmmod nvme_tcp 00:32:23.787 rmmod nvme_fabrics 00:32:23.787 rmmod nvme_keyring 00:32:23.787 19:24:29 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:23.787 19:24:29 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:23.787 19:24:29 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:23.787 19:24:29 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1798311 ']' 00:32:23.787 19:24:29 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1798311 00:32:23.788 19:24:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1798311 ']' 00:32:23.788 19:24:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1798311 00:32:23.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1798311) - No such process 00:32:23.788 19:24:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1798311 is not found' 00:32:23.788 Process with pid 1798311 is not found 00:32:23.788 19:24:29 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:23.788 19:24:29 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:25.163 Waiting for block devices as requested 00:32:25.422 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:32:25.422 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:25.681 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:25.681 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:25.681 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:25.939 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:25.940 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:25.940 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:25.940 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:26.197 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:26.197 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:26.197 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:26.197 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:26.456 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:26.456 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:26.456 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:32:26.714 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:26.714 19:24:32 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:26.714 19:24:32 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:26.714 19:24:32 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:26.714 19:24:32 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:26.714 19:24:32 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.714 19:24:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:26.715 19:24:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.248 19:24:34 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:29.248 00:32:29.248 real 0m42.015s 00:32:29.248 user 1m3.880s 00:32:29.248 sys 0m12.083s 00:32:29.248 19:24:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:29.248 19:24:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:29.248 ************************************ 00:32:29.248 END TEST nvmf_abort_qd_sizes 00:32:29.248 ************************************ 00:32:29.248 19:24:34 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:29.248 19:24:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:29.248 19:24:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:29.248 19:24:34 -- common/autotest_common.sh@10 -- # set +x 00:32:29.248 ************************************ 00:32:29.248 START TEST keyring_file 00:32:29.248 ************************************ 00:32:29.248 19:24:34 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:29.248 * Looking for test storage... 
00:32:29.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:29.248 19:24:34 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:29.248 19:24:34 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:29.248 19:24:34 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:29.248 19:24:34 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:29.248 19:24:34 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:29.248 19:24:34 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:29.248 19:24:34 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:29.248 19:24:34 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:29.248 19:24:34 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:29.248 19:24:34 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:29.248 19:24:34 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:29.249 19:24:34 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.249 19:24:34 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.249 19:24:34 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.249 19:24:34 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.249 19:24:34 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.249 19:24:34 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.249 19:24:34 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:29.249 19:24:34 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:29.249 19:24:34 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:29.249 19:24:34 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:29.249 19:24:34 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:29.249 19:24:34 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:29.249 19:24:34 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:29.249 19:24:34 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HFdj6pBw0q 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:29.249 19:24:34 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HFdj6pBw0q 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HFdj6pBw0q 00:32:29.249 19:24:34 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.HFdj6pBw0q 00:32:29.249 19:24:34 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FdVlyARWW8 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:29.249 19:24:34 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FdVlyARWW8 00:32:29.249 19:24:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FdVlyARWW8 00:32:29.249 19:24:34 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.FdVlyARWW8 00:32:29.249 19:24:34 keyring_file -- keyring/file.sh@30 -- # tgtpid=1804829 00:32:29.249 19:24:34 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:29.249 19:24:34 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1804829 00:32:29.249 19:24:34 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1804829 ']' 00:32:29.249 19:24:34 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.249 19:24:34 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:29.249 19:24:34 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.249 19:24:34 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:29.249 19:24:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:29.249 [2024-07-24 19:24:34.772924] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
00:32:29.249 [2024-07-24 19:24:34.773112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1804829 ] 00:32:29.249 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.509 [2024-07-24 19:24:34.944774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.509 [2024-07-24 19:24:35.160510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:30.446 19:24:36 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:30.446 [2024-07-24 19:24:36.068686] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:30.446 null0 00:32:30.446 [2024-07-24 19:24:36.101283] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:30.446 [2024-07-24 19:24:36.102037] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:30.446 [2024-07-24 19:24:36.109254] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.446 19:24:36 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:30.446 [2024-07-24 19:24:36.125294] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:30.446 request: 00:32:30.446 { 00:32:30.446 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:30.446 "secure_channel": false, 00:32:30.446 "listen_address": { 00:32:30.446 "trtype": "tcp", 00:32:30.446 "traddr": "127.0.0.1", 00:32:30.446 "trsvcid": "4420" 00:32:30.446 }, 00:32:30.446 "method": "nvmf_subsystem_add_listener", 00:32:30.446 "req_id": 1 00:32:30.446 } 00:32:30.446 Got JSON-RPC error response 00:32:30.446 response: 00:32:30.446 { 00:32:30.446 "code": -32602, 00:32:30.446 "message": "Invalid parameters" 00:32:30.446 } 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@653 -- # es=1 
00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:30.446 19:24:36 keyring_file -- keyring/file.sh@46 -- # bperfpid=1804978 00:32:30.446 19:24:36 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:30.446 19:24:36 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1804978 /var/tmp/bperf.sock 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1804978 ']' 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:30.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:30.446 19:24:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:30.705 [2024-07-24 19:24:36.180921] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 00:32:30.705 [2024-07-24 19:24:36.181012] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1804978 ] 00:32:30.705 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.705 [2024-07-24 19:24:36.257289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.705 [2024-07-24 19:24:36.398591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.964 19:24:36 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:30.964 19:24:36 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:30.964 19:24:36 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HFdj6pBw0q 00:32:30.964 19:24:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HFdj6pBw0q 00:32:31.222 19:24:36 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FdVlyARWW8 00:32:31.222 19:24:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FdVlyARWW8 00:32:31.480 19:24:37 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:31.480 19:24:37 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:31.480 19:24:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.480 19:24:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.480 19:24:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:32.046 19:24:37 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.HFdj6pBw0q == \/\t\m\p\/\t\m\p\.\H\F\d\j\6\p\B\w\0\q ]] 00:32:32.046 19:24:37 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:32:32.046 19:24:37 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:32.046 19:24:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:32.046 19:24:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.046 19:24:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:32.304 19:24:37 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.FdVlyARWW8 == \/\t\m\p\/\t\m\p\.\F\d\V\l\y\A\R\W\W\8 ]] 00:32:32.304 19:24:37 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:32.304 19:24:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:32.304 19:24:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:32.304 19:24:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:32.304 19:24:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.304 19:24:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:32.562 19:24:38 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:32.562 19:24:38 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:32.562 19:24:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:32.562 19:24:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:32.562 19:24:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:32.562 19:24:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.562 19:24:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:33.128 19:24:38 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:33.128 19:24:38 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.128 19:24:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.386 [2024-07-24 19:24:38.903191] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:33.386 nvme0n1 00:32:33.386 19:24:38 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:33.387 19:24:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:33.387 19:24:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:33.387 19:24:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:33.387 19:24:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:33.387 19:24:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:33.644 19:24:39 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:33.644 19:24:39 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:33.644 19:24:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:33.644 19:24:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:33.644 19:24:39 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:33.645 19:24:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:33.645 19:24:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:34.211 19:24:39 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:34.211 19:24:39 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:34.211 Running I/O for 1 seconds... 00:32:35.145 00:32:35.145 Latency(us) 00:32:35.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.145 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:35.145 nvme0n1 : 1.01 5542.36 21.65 0.00 0.00 22952.96 6553.60 34369.99 00:32:35.145 =================================================================================================================== 00:32:35.145 Total : 5542.36 21.65 0.00 0.00 22952.96 6553.60 34369.99 00:32:35.145 0 00:32:35.403 19:24:40 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:35.403 19:24:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:35.660 19:24:41 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:35.660 19:24:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:35.660 19:24:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:35.660 19:24:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:35.660 19:24:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:35.660 19:24:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:35.918 19:24:41 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:35.918 19:24:41 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:35.919 19:24:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:35.919 19:24:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:35.919 19:24:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:35.919 19:24:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:35.919 19:24:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:36.485 19:24:41 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:36.485 19:24:41 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:36.485 19:24:41 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:36.485 19:24:41 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:36.485 19:24:41 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:36.485 19:24:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:36.485 19:24:41 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:36.485 19:24:41 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:36.485 19:24:41 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:36.485 19:24:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:36.744 [2024-07-24 19:24:42.371760] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:36.744 [2024-07-24 19:24:42.372052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11147a0 (107): Transport endpoint is not connected 00:32:36.744 [2024-07-24 19:24:42.373039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11147a0 (9): Bad file descriptor 00:32:36.744 [2024-07-24 19:24:42.374046] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:36.744 [2024-07-24 19:24:42.374075] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:36.744 [2024-07-24 19:24:42.374105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:36.744 request: 00:32:36.744 { 00:32:36.744 "name": "nvme0", 00:32:36.744 "trtype": "tcp", 00:32:36.744 "traddr": "127.0.0.1", 00:32:36.744 "adrfam": "ipv4", 00:32:36.744 "trsvcid": "4420", 00:32:36.744 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:36.744 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:36.744 "prchk_reftag": false, 00:32:36.744 "prchk_guard": false, 00:32:36.744 "hdgst": false, 00:32:36.744 "ddgst": false, 00:32:36.744 "psk": "key1", 00:32:36.744 "method": "bdev_nvme_attach_controller", 00:32:36.744 "req_id": 1 00:32:36.744 } 00:32:36.744 Got JSON-RPC error response 00:32:36.744 response: 00:32:36.744 { 00:32:36.744 "code": -5, 00:32:36.744 "message": "Input/output error" 00:32:36.744 } 00:32:36.744 19:24:42 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:36.744 19:24:42 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:36.744 19:24:42 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:36.744 19:24:42 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:36.744 19:24:42 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:36.744 19:24:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:36.744 19:24:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:36.744 19:24:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:36.744 19:24:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.744 19:24:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:37.310 19:24:42 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:37.310 19:24:42 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:37.310 19:24:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:37.310 19:24:42 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:37.310 19:24:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:37.310 19:24:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:37.310 19:24:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:37.878 19:24:43 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:37.878 19:24:43 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:37.878 19:24:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:38.137 19:24:43 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:38.137 19:24:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:38.415 19:24:43 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:38.415 19:24:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:38.415 19:24:43 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:38.682 19:24:44 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:38.682 19:24:44 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.HFdj6pBw0q 00:32:38.682 19:24:44 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.HFdj6pBw0q 00:32:38.682 19:24:44 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:38.682 19:24:44 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.HFdj6pBw0q 00:32:38.682 19:24:44 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:38.682 19:24:44 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.682 19:24:44 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:38.682 19:24:44 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.682 19:24:44 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HFdj6pBw0q 00:32:38.682 19:24:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HFdj6pBw0q 00:32:38.940 [2024-07-24 19:24:44.607591] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.HFdj6pBw0q': 0100660 00:32:38.940 [2024-07-24 19:24:44.607639] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:38.940 request: 00:32:38.940 { 00:32:38.940 "name": "key0", 00:32:38.940 "path": "/tmp/tmp.HFdj6pBw0q", 00:32:38.940 "method": "keyring_file_add_key", 00:32:38.940 "req_id": 1 00:32:38.940 } 00:32:38.940 Got JSON-RPC error response 00:32:38.940 response: 00:32:38.940 { 00:32:38.940 "code": -1, 00:32:38.940 "message": "Operation not permitted" 00:32:38.940 } 00:32:38.940 19:24:44 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:38.940 19:24:44 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:38.940 19:24:44 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:38.940 19:24:44 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:38.940 19:24:44 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.HFdj6pBw0q 00:32:38.940 19:24:44 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HFdj6pBw0q 00:32:39.197 19:24:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HFdj6pBw0q 00:32:39.455 19:24:44 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.HFdj6pBw0q 00:32:39.455 19:24:44 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:39.455 19:24:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:39.455 19:24:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:39.455 19:24:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:39.455 19:24:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:39.455 19:24:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:39.713 19:24:45 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:39.713 19:24:45 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:39.713 19:24:45 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:39.713 19:24:45 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:39.714 19:24:45 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:39.714 19:24:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.714 19:24:45 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:39.714 19:24:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:39.714 19:24:45 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:39.714 19:24:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:40.280 [2024-07-24 19:24:45.834920] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.HFdj6pBw0q': No such file or directory 00:32:40.280 [2024-07-24 19:24:45.834971] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:40.280 [2024-07-24 19:24:45.835011] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:40.280 [2024-07-24 19:24:45.835042] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:40.280 [2024-07-24 19:24:45.835059] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:40.280 request: 00:32:40.280 { 00:32:40.280 "name": "nvme0", 00:32:40.280 "trtype": "tcp", 00:32:40.280 "traddr": "127.0.0.1", 00:32:40.280 "adrfam": "ipv4", 00:32:40.280 
"trsvcid": "4420", 00:32:40.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:40.280 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:40.280 "prchk_reftag": false, 00:32:40.280 "prchk_guard": false, 00:32:40.280 "hdgst": false, 00:32:40.280 "ddgst": false, 00:32:40.280 "psk": "key0", 00:32:40.280 "method": "bdev_nvme_attach_controller", 00:32:40.280 "req_id": 1 00:32:40.280 } 00:32:40.280 Got JSON-RPC error response 00:32:40.280 response: 00:32:40.280 { 00:32:40.280 "code": -19, 00:32:40.280 "message": "No such device" 00:32:40.280 } 00:32:40.280 19:24:45 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:40.280 19:24:45 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:40.280 19:24:45 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:40.280 19:24:45 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:40.280 19:24:45 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:40.280 19:24:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:40.539 19:24:46 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:40.539 19:24:46 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:40.539 19:24:46 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:40.539 19:24:46 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:40.539 19:24:46 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:40.539 19:24:46 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:40.539 19:24:46 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Jedv2ZJykE 00:32:40.539 19:24:46 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:40.539 19:24:46 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:40.539 19:24:46 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:40.539 19:24:46 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:40.539 19:24:46 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:40.539 19:24:46 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:40.539 19:24:46 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:40.539 19:24:46 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Jedv2ZJykE 00:32:40.539 19:24:46 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Jedv2ZJykE 00:32:40.539 19:24:46 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.Jedv2ZJykE 00:32:40.539 19:24:46 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Jedv2ZJykE 00:32:40.539 19:24:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Jedv2ZJykE 00:32:41.106 19:24:46 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:41.106 19:24:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:41.364 nvme0n1 00:32:41.364 
19:24:46 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:41.364 19:24:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:41.364 19:24:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:41.364 19:24:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:41.364 19:24:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:41.364 19:24:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:41.931 19:24:47 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:41.931 19:24:47 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:41.931 19:24:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:42.497 19:24:47 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:42.497 19:24:47 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:42.497 19:24:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:42.497 19:24:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:42.497 19:24:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:42.755 19:24:48 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:42.755 19:24:48 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:42.755 19:24:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:42.755 19:24:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:42.755 19:24:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:42.755 19:24:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:42.755 19:24:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:43.013 19:24:48 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:43.013 19:24:48 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:43.013 19:24:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:43.271 19:24:48 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:43.271 19:24:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:43.271 19:24:48 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:43.837 19:24:49 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:43.837 19:24:49 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Jedv2ZJykE 00:32:43.837 19:24:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Jedv2ZJykE 00:32:44.095 19:24:49 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FdVlyARWW8 00:32:44.095 19:24:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FdVlyARWW8 00:32:44.353 19:24:49 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:44.353 19:24:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:44.919 nvme0n1 00:32:44.919 19:24:50 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:44.919 19:24:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:45.487 19:24:50 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:45.487 "subsystems": [ 00:32:45.487 { 00:32:45.487 "subsystem": "keyring", 00:32:45.487 "config": [ 00:32:45.487 { 00:32:45.487 "method": "keyring_file_add_key", 00:32:45.487 "params": { 00:32:45.487 "name": "key0", 00:32:45.487 "path": "/tmp/tmp.Jedv2ZJykE" 00:32:45.487 } 00:32:45.487 }, 00:32:45.487 { 00:32:45.487 "method": "keyring_file_add_key", 00:32:45.487 "params": { 00:32:45.487 "name": "key1", 00:32:45.487 "path": "/tmp/tmp.FdVlyARWW8" 00:32:45.487 } 00:32:45.487 } 00:32:45.487 ] 00:32:45.487 }, 00:32:45.487 { 00:32:45.487 "subsystem": "iobuf", 00:32:45.487 "config": [ 00:32:45.487 { 00:32:45.487 "method": "iobuf_set_options", 00:32:45.487 "params": { 00:32:45.487 "small_pool_count": 8192, 00:32:45.487 "large_pool_count": 1024, 00:32:45.487 "small_bufsize": 8192, 00:32:45.487 "large_bufsize": 135168 00:32:45.487 } 00:32:45.487 } 00:32:45.487 ] 00:32:45.487 }, 00:32:45.487 { 00:32:45.487 "subsystem": "sock", 00:32:45.487 "config": [ 00:32:45.487 { 00:32:45.487 "method": "sock_set_default_impl", 00:32:45.487 "params": { 00:32:45.487 "impl_name": "posix" 00:32:45.487 } 00:32:45.487 }, 00:32:45.487 { 00:32:45.487 "method": "sock_impl_set_options", 00:32:45.487 "params": { 00:32:45.487 "impl_name": "ssl", 00:32:45.487 "recv_buf_size": 4096, 00:32:45.487 "send_buf_size": 4096, 00:32:45.487 "enable_recv_pipe": true, 00:32:45.487 "enable_quickack": false, 00:32:45.487 "enable_placement_id": 0, 00:32:45.487 "enable_zerocopy_send_server": true, 00:32:45.487 "enable_zerocopy_send_client": false, 00:32:45.487 "zerocopy_threshold": 0, 00:32:45.487 "tls_version": 0, 00:32:45.487 "enable_ktls": false 00:32:45.487 } 00:32:45.487 }, 00:32:45.487 { 00:32:45.487 "method": "sock_impl_set_options", 00:32:45.487 "params": { 00:32:45.487 "impl_name": "posix", 00:32:45.487 "recv_buf_size": 2097152, 00:32:45.487 "send_buf_size": 2097152, 00:32:45.487 "enable_recv_pipe": true, 00:32:45.487 "enable_quickack": false, 00:32:45.487 "enable_placement_id": 0, 00:32:45.487 "enable_zerocopy_send_server": true, 00:32:45.487 "enable_zerocopy_send_client": false, 00:32:45.487 "zerocopy_threshold": 0, 00:32:45.487 "tls_version": 0, 00:32:45.487 "enable_ktls": false 00:32:45.487 } 00:32:45.487 } 00:32:45.487 ] 00:32:45.487 }, 00:32:45.487 { 00:32:45.487 "subsystem": "vmd", 00:32:45.487 "config": [] 00:32:45.487 }, 00:32:45.487 { 00:32:45.487 "subsystem": "accel", 00:32:45.487 "config": [ 00:32:45.487 { 00:32:45.487 "method": "accel_set_options", 00:32:45.487 "params": { 00:32:45.487 "small_cache_size": 128, 00:32:45.487 "large_cache_size": 16, 00:32:45.487 "task_count": 2048, 00:32:45.487 "sequence_count": 2048, 00:32:45.487 "buf_count": 2048 00:32:45.487 } 00:32:45.487 } 00:32:45.487 ] 00:32:45.487 
}, 00:32:45.487 { 00:32:45.487 "subsystem": "bdev", 00:32:45.487 "config": [ 00:32:45.487 { 00:32:45.487 "method": "bdev_set_options", 00:32:45.487 "params": { 00:32:45.487 "bdev_io_pool_size": 65535, 00:32:45.487 "bdev_io_cache_size": 256, 00:32:45.487 "bdev_auto_examine": true, 00:32:45.487 "iobuf_small_cache_size": 128, 00:32:45.487 "iobuf_large_cache_size": 16 00:32:45.487 } 00:32:45.487 }, 00:32:45.487 { 00:32:45.487 "method": "bdev_raid_set_options", 00:32:45.487 "params": { 00:32:45.487 "process_window_size_kb": 1024, 00:32:45.487 "process_max_bandwidth_mb_sec": 0 00:32:45.487 } 00:32:45.487 }, 00:32:45.487 { 00:32:45.487 "method": "bdev_iscsi_set_options", 00:32:45.487 "params": { 00:32:45.487 "timeout_sec": 30 00:32:45.487 } 00:32:45.487 }, 00:32:45.487 { 00:32:45.487 "method": "bdev_nvme_set_options", 00:32:45.487 "params": { 00:32:45.487 "action_on_timeout": "none", 00:32:45.487 "timeout_us": 0, 00:32:45.487 "timeout_admin_us": 0, 00:32:45.487 "keep_alive_timeout_ms": 10000, 00:32:45.487 "arbitration_burst": 0, 00:32:45.487 "low_priority_weight": 0, 00:32:45.487 "medium_priority_weight": 0, 00:32:45.487 "high_priority_weight": 0, 00:32:45.487 "nvme_adminq_poll_period_us": 10000, 00:32:45.487 "nvme_ioq_poll_period_us": 0, 00:32:45.487 "io_queue_requests": 512, 00:32:45.487 "delay_cmd_submit": true, 00:32:45.487 "transport_retry_count": 4, 00:32:45.487 "bdev_retry_count": 3, 00:32:45.487 "transport_ack_timeout": 0, 00:32:45.487 "ctrlr_loss_timeout_sec": 0, 00:32:45.487 "reconnect_delay_sec": 0, 00:32:45.487 "fast_io_fail_timeout_sec": 0, 00:32:45.487 "disable_auto_failback": false, 00:32:45.487 "generate_uuids": false, 00:32:45.487 "transport_tos": 0, 00:32:45.487 "nvme_error_stat": false, 00:32:45.487 "rdma_srq_size": 0, 00:32:45.487 "io_path_stat": false, 00:32:45.487 "allow_accel_sequence": false, 00:32:45.487 "rdma_max_cq_size": 0, 00:32:45.487 "rdma_cm_event_timeout_ms": 0, 00:32:45.487 "dhchap_digests": [ 00:32:45.487 "sha256", 00:32:45.487 "sha384", 00:32:45.487 "sha512" 00:32:45.487 ], 00:32:45.487 "dhchap_dhgroups": [ 00:32:45.487 "null", 00:32:45.487 "ffdhe2048", 00:32:45.487 "ffdhe3072", 00:32:45.487 "ffdhe4096", 00:32:45.488 "ffdhe6144", 00:32:45.488 "ffdhe8192" 00:32:45.488 ] 00:32:45.488 } 00:32:45.488 }, 00:32:45.488 { 00:32:45.488 "method": "bdev_nvme_attach_controller", 00:32:45.488 "params": { 00:32:45.488 "name": "nvme0", 00:32:45.488 "trtype": "TCP", 00:32:45.488 "adrfam": "IPv4", 00:32:45.488 "traddr": "127.0.0.1", 00:32:45.488 "trsvcid": "4420", 00:32:45.488 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:45.488 "prchk_reftag": false, 00:32:45.488 "prchk_guard": false, 00:32:45.488 "ctrlr_loss_timeout_sec": 0, 00:32:45.488 "reconnect_delay_sec": 0, 00:32:45.488 "fast_io_fail_timeout_sec": 0, 00:32:45.488 "psk": "key0", 00:32:45.488 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:45.488 "hdgst": false, 00:32:45.488 "ddgst": false 00:32:45.488 } 00:32:45.488 }, 00:32:45.488 { 00:32:45.488 "method": "bdev_nvme_set_hotplug", 00:32:45.488 "params": { 00:32:45.488 "period_us": 100000, 00:32:45.488 "enable": false 00:32:45.488 } 00:32:45.488 }, 00:32:45.488 { 00:32:45.488 "method": "bdev_wait_for_examine" 00:32:45.488 } 00:32:45.488 ] 00:32:45.488 }, 00:32:45.488 { 00:32:45.488 "subsystem": "nbd", 00:32:45.488 "config": [] 00:32:45.488 } 00:32:45.488 ] 00:32:45.488 }' 00:32:45.488 19:24:50 keyring_file -- keyring/file.sh@114 -- # killprocess 1804978 00:32:45.488 19:24:50 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1804978 ']' 00:32:45.488 19:24:50 
keyring_file -- common/autotest_common.sh@954 -- # kill -0 1804978 00:32:45.488 19:24:50 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:45.488 19:24:50 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:45.488 19:24:50 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1804978 00:32:45.488 19:24:50 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:45.488 19:24:50 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:45.488 19:24:50 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1804978' 00:32:45.488 killing process with pid 1804978 00:32:45.488 19:24:50 keyring_file -- common/autotest_common.sh@969 -- # kill 1804978 00:32:45.488 Received shutdown signal, test time was about 1.000000 seconds 00:32:45.488 00:32:45.488 Latency(us) 00:32:45.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.488 =================================================================================================================== 00:32:45.488 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:45.488 19:24:50 keyring_file -- common/autotest_common.sh@974 -- # wait 1804978 00:32:45.747 19:24:51 keyring_file -- keyring/file.sh@117 -- # bperfpid=1806836 00:32:45.747 19:24:51 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1806836 /var/tmp/bperf.sock 00:32:45.747 19:24:51 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1806836 ']' 00:32:45.747 19:24:51 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:45.747 19:24:51 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:45.747 19:24:51 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:45.747 19:24:51 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:45.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
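The "-c /dev/fd/63" in the bdevperf invocation above is bash process substitution: the subsystem JSON echoed just below is handed to bdevperf as a config file without ever touching disk. A minimal sketch of the launch pattern, with the flags copied from the trace and $config assumed to hold that JSON:

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# -z defers I/O until perform_tests is invoked over the -r RPC socket
"$bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &
bperfpid=$!   # the trace's bperfpid=1806836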
00:32:45.747 19:24:51 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:45.747 19:24:51 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:45.747 19:24:51 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:45.747 "subsystems": [ 00:32:45.747 { 00:32:45.747 "subsystem": "keyring", 00:32:45.747 "config": [ 00:32:45.747 { 00:32:45.747 "method": "keyring_file_add_key", 00:32:45.747 "params": { 00:32:45.747 "name": "key0", 00:32:45.747 "path": "/tmp/tmp.Jedv2ZJykE" 00:32:45.747 } 00:32:45.747 }, 00:32:45.747 { 00:32:45.747 "method": "keyring_file_add_key", 00:32:45.747 "params": { 00:32:45.747 "name": "key1", 00:32:45.747 "path": "/tmp/tmp.FdVlyARWW8" 00:32:45.747 } 00:32:45.747 } 00:32:45.747 ] 00:32:45.747 }, 00:32:45.747 { 00:32:45.747 "subsystem": "iobuf", 00:32:45.747 "config": [ 00:32:45.747 { 00:32:45.747 "method": "iobuf_set_options", 00:32:45.747 "params": { 00:32:45.747 "small_pool_count": 8192, 00:32:45.747 "large_pool_count": 1024, 00:32:45.747 "small_bufsize": 8192, 00:32:45.747 "large_bufsize": 135168 00:32:45.747 } 00:32:45.747 } 00:32:45.747 ] 00:32:45.747 }, 00:32:45.747 { 00:32:45.747 "subsystem": "sock", 00:32:45.747 "config": [ 00:32:45.747 { 00:32:45.747 "method": "sock_set_default_impl", 00:32:45.747 "params": { 00:32:45.747 "impl_name": "posix" 00:32:45.747 } 00:32:45.747 }, 00:32:45.747 { 00:32:45.747 "method": "sock_impl_set_options", 00:32:45.747 "params": { 00:32:45.747 "impl_name": "ssl", 00:32:45.747 "recv_buf_size": 4096, 00:32:45.747 "send_buf_size": 4096, 00:32:45.747 "enable_recv_pipe": true, 00:32:45.747 "enable_quickack": false, 00:32:45.747 "enable_placement_id": 0, 00:32:45.747 "enable_zerocopy_send_server": true, 00:32:45.747 "enable_zerocopy_send_client": false, 00:32:45.747 "zerocopy_threshold": 0, 00:32:45.747 "tls_version": 0, 00:32:45.747 "enable_ktls": false 00:32:45.747 } 00:32:45.747 }, 00:32:45.747 { 00:32:45.747 "method": "sock_impl_set_options", 00:32:45.747 "params": { 00:32:45.747 "impl_name": "posix", 00:32:45.747 "recv_buf_size": 2097152, 00:32:45.747 "send_buf_size": 2097152, 00:32:45.747 "enable_recv_pipe": true, 00:32:45.747 "enable_quickack": false, 00:32:45.747 "enable_placement_id": 0, 00:32:45.747 "enable_zerocopy_send_server": true, 00:32:45.747 "enable_zerocopy_send_client": false, 00:32:45.747 "zerocopy_threshold": 0, 00:32:45.747 "tls_version": 0, 00:32:45.747 "enable_ktls": false 00:32:45.747 } 00:32:45.747 } 00:32:45.747 ] 00:32:45.747 }, 00:32:45.747 { 00:32:45.747 "subsystem": "vmd", 00:32:45.747 "config": [] 00:32:45.747 }, 00:32:45.747 { 00:32:45.747 "subsystem": "accel", 00:32:45.747 "config": [ 00:32:45.747 { 00:32:45.747 "method": "accel_set_options", 00:32:45.747 "params": { 00:32:45.747 "small_cache_size": 128, 00:32:45.747 "large_cache_size": 16, 00:32:45.747 "task_count": 2048, 00:32:45.748 "sequence_count": 2048, 00:32:45.748 "buf_count": 2048 00:32:45.748 } 00:32:45.748 } 00:32:45.748 ] 00:32:45.748 }, 00:32:45.748 { 00:32:45.748 "subsystem": "bdev", 00:32:45.748 "config": [ 00:32:45.748 { 00:32:45.748 "method": "bdev_set_options", 00:32:45.748 "params": { 00:32:45.748 "bdev_io_pool_size": 65535, 00:32:45.748 "bdev_io_cache_size": 256, 00:32:45.748 "bdev_auto_examine": true, 00:32:45.748 "iobuf_small_cache_size": 128, 00:32:45.748 "iobuf_large_cache_size": 16 00:32:45.748 } 00:32:45.748 }, 00:32:45.748 { 00:32:45.748 "method": "bdev_raid_set_options", 00:32:45.748 "params": { 00:32:45.748 "process_window_size_kb": 1024, 00:32:45.748 "process_max_bandwidth_mb_sec": 0 00:32:45.748 
} 00:32:45.748 }, 00:32:45.748 { 00:32:45.748 "method": "bdev_iscsi_set_options", 00:32:45.748 "params": { 00:32:45.748 "timeout_sec": 30 00:32:45.748 } 00:32:45.748 }, 00:32:45.748 { 00:32:45.748 "method": "bdev_nvme_set_options", 00:32:45.748 "params": { 00:32:45.748 "action_on_timeout": "none", 00:32:45.748 "timeout_us": 0, 00:32:45.748 "timeout_admin_us": 0, 00:32:45.748 "keep_alive_timeout_ms": 10000, 00:32:45.748 "arbitration_burst": 0, 00:32:45.748 "low_priority_weight": 0, 00:32:45.748 "medium_priority_weight": 0, 00:32:45.748 "high_priority_weight": 0, 00:32:45.748 "nvme_adminq_poll_period_us": 10000, 00:32:45.748 "nvme_ioq_poll_period_us": 0, 00:32:45.748 "io_queue_requests": 512, 00:32:45.748 "delay_cmd_submit": true, 00:32:45.748 "transport_retry_count": 4, 00:32:45.748 "bdev_retry_count": 3, 00:32:45.748 "transport_ack_timeout": 0, 00:32:45.748 "ctrlr_loss_timeout_sec": 0, 00:32:45.748 "reconnect_delay_sec": 0, 00:32:45.748 "fast_io_fail_timeout_sec": 0, 00:32:45.748 "disable_auto_failback": false, 00:32:45.748 "generate_uuids": false, 00:32:45.748 "transport_tos": 0, 00:32:45.748 "nvme_error_stat": false, 00:32:45.748 "rdma_srq_size": 0, 00:32:45.748 "io_path_stat": false, 00:32:45.748 "allow_accel_sequence": false, 00:32:45.748 "rdma_max_cq_size": 0, 00:32:45.748 "rdma_cm_event_timeout_ms": 0, 00:32:45.748 "dhchap_digests": [ 00:32:45.748 "sha256", 00:32:45.748 "sha384", 00:32:45.748 "sha512" 00:32:45.748 ], 00:32:45.748 "dhchap_dhgroups": [ 00:32:45.748 "null", 00:32:45.748 "ffdhe2048", 00:32:45.748 "ffdhe3072", 00:32:45.748 "ffdhe4096", 00:32:45.748 "ffdhe6144", 00:32:45.748 "ffdhe8192" 00:32:45.748 ] 00:32:45.748 } 00:32:45.748 }, 00:32:45.748 { 00:32:45.748 "method": "bdev_nvme_attach_controller", 00:32:45.748 "params": { 00:32:45.748 "name": "nvme0", 00:32:45.748 "trtype": "TCP", 00:32:45.748 "adrfam": "IPv4", 00:32:45.748 "traddr": "127.0.0.1", 00:32:45.748 "trsvcid": "4420", 00:32:45.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:45.748 "prchk_reftag": false, 00:32:45.748 "prchk_guard": false, 00:32:45.748 "ctrlr_loss_timeout_sec": 0, 00:32:45.748 "reconnect_delay_sec": 0, 00:32:45.748 "fast_io_fail_timeout_sec": 0, 00:32:45.748 "psk": "key0", 00:32:45.748 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:45.748 "hdgst": false, 00:32:45.748 "ddgst": false 00:32:45.748 } 00:32:45.748 }, 00:32:45.748 { 00:32:45.748 "method": "bdev_nvme_set_hotplug", 00:32:45.748 "params": { 00:32:45.748 "period_us": 100000, 00:32:45.748 "enable": false 00:32:45.748 } 00:32:45.748 }, 00:32:45.748 { 00:32:45.748 "method": "bdev_wait_for_examine" 00:32:45.748 } 00:32:45.748 ] 00:32:45.748 }, 00:32:45.748 { 00:32:45.748 "subsystem": "nbd", 00:32:45.748 "config": [] 00:32:45.748 } 00:32:45.748 ] 00:32:45.748 }' 00:32:45.748 [2024-07-24 19:24:51.294076] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
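With bdevperf back up, the test returns to polling key state through the bperf RPC socket. Every bperf_cmd / get_key / get_refcnt step in this trace is a thin wrapper around scripts/rpc.py; a sketch reconstructed from the xtrace above (the jq filters are verbatim, the function bodies are an illustration rather than the literal keyring/common.sh source):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

bperf_cmd() { "$rpc" -s /var/tmp/bperf.sock "$@"; }   # bdevperf's socket, not spdk_tgt's

get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }

# the shape of the assertions that follow:
(( $(bperf_cmd keyring_get_keys | jq length) == 2 ))   # both key0 and key1 registered
(( $(get_refcnt key0) == 2 ))                          # as asserted at file.sh@121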
00:32:45.748 [2024-07-24 19:24:51.294186] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806836 ] 00:32:45.748 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.748 [2024-07-24 19:24:51.375254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.007 [2024-07-24 19:24:51.516447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.265 [2024-07-24 19:24:51.724789] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:46.265 19:24:51 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:46.265 19:24:51 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:46.265 19:24:51 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:46.265 19:24:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:46.265 19:24:51 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:46.523 19:24:52 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:46.523 19:24:52 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:46.523 19:24:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:46.523 19:24:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:46.523 19:24:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:46.523 19:24:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:46.523 19:24:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:47.090 19:24:52 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:47.090 19:24:52 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:47.090 19:24:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:47.091 19:24:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:47.091 19:24:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:47.091 19:24:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:47.091 19:24:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:47.656 19:24:53 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:47.656 19:24:53 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:47.656 19:24:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:47.656 19:24:53 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:47.657 19:24:53 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:47.657 19:24:53 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:47.657 19:24:53 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Jedv2ZJykE /tmp/tmp.FdVlyARWW8 00:32:47.657 19:24:53 keyring_file -- keyring/file.sh@20 -- # killprocess 1806836 00:32:47.657 19:24:53 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1806836 ']' 00:32:47.657 19:24:53 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1806836 00:32:47.657 19:24:53 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:32:47.913 19:24:53 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:47.914 19:24:53 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1806836 00:32:47.914 19:24:53 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:47.914 19:24:53 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:47.914 19:24:53 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1806836' 00:32:47.914 killing process with pid 1806836 00:32:47.914 19:24:53 keyring_file -- common/autotest_common.sh@969 -- # kill 1806836 00:32:47.914 Received shutdown signal, test time was about 1.000000 seconds 00:32:47.914 00:32:47.914 Latency(us) 00:32:47.914 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.914 =================================================================================================================== 00:32:47.914 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:47.914 19:24:53 keyring_file -- common/autotest_common.sh@974 -- # wait 1806836 00:32:48.171 19:24:53 keyring_file -- keyring/file.sh@21 -- # killprocess 1804829 00:32:48.171 19:24:53 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1804829 ']' 00:32:48.171 19:24:53 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1804829 00:32:48.171 19:24:53 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:48.171 19:24:53 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:48.171 19:24:53 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1804829 00:32:48.171 19:24:53 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:48.172 19:24:53 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:48.172 19:24:53 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1804829' 00:32:48.172 killing process with pid 1804829 00:32:48.172 19:24:53 keyring_file -- common/autotest_common.sh@969 -- # kill 1804829 00:32:48.172 [2024-07-24 19:24:53.721899] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:48.172 19:24:53 keyring_file -- common/autotest_common.sh@974 -- # wait 1804829 00:32:48.740 00:32:48.740 real 0m19.916s 00:32:48.740 user 0m49.911s 00:32:48.740 sys 0m4.390s 00:32:48.740 19:24:54 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:48.740 19:24:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:48.740 ************************************ 00:32:48.740 END TEST keyring_file 00:32:48.740 ************************************ 00:32:48.740 19:24:54 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:32:48.740 19:24:54 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:48.740 19:24:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:48.740 19:24:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:48.740 19:24:54 -- common/autotest_common.sh@10 -- # set +x 00:32:48.740 ************************************ 00:32:48.740 START TEST keyring_linux 00:32:48.740 ************************************ 00:32:48.740 19:24:54 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:48.999 * Looking for test 
storage... 00:32:48.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:48.999 19:24:54 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:48.999 19:24:54 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:48.999 19:24:54 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:48.999 19:24:54 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:48.999 19:24:54 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.999 19:24:54 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.999 19:24:54 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.999 19:24:54 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:48.999 19:24:54 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:48.999 19:24:54 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:48.999 19:24:54 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:48.999 19:24:54 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:48.999 19:24:54 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:48.999 19:24:54 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:48.999 19:24:54 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:48.999 19:24:54 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:48.999 /tmp/:spdk-test:key0 00:32:48.999 19:24:54 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:48.999 19:24:54 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:48.999 19:24:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:48.999 /tmp/:spdk-test:key1 00:32:48.999 19:24:54 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1807318 00:32:48.999 19:24:54 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:48.999 19:24:54 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1807318 00:32:48.999 19:24:54 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1807318 ']' 00:32:48.999 19:24:54 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.999 19:24:54 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:48.999 19:24:54 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:48.999 19:24:54 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:48.999 19:24:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:49.258 [2024-07-24 19:24:54.740536] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
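Both suites mint their TLS PSKs the same way: format_interchange_psk wraps the literal key text in the NVMe TLS interchange format, NVMeTLSkey-1:<hh>:<base64 of key plus CRC-32>:, and the result lands in a mode-0600 file (here /tmp/:spdk-test:key0). A sketch of that step, mirroring the "python -" invocation in the trace; the helper name below and the little-endian CRC trailer are assumptions about format_key's internals, not verified source:

# hypothetical stand-in for prep_key from keyring/common.sh
prep_key_file() {
    local key=$1 digest=$2 path=$3
    python3 - "$key" "$digest" > "$path" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                    # key text used as-is, not hex-decoded
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC-32 trailer (endianness assumed)
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
    chmod 0600 "$path"   # match the chmod 0600 the trace applies before registering
}

prep_key_file 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0
# for key0 the trace registers: NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: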
00:32:49.258 [2024-07-24 19:24:54.740656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1807318 ] 00:32:49.258 EAL: No free 2048 kB hugepages reported on node 1 00:32:49.258 [2024-07-24 19:24:54.843219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.516 [2024-07-24 19:24:54.983273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.774 19:24:55 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:49.774 19:24:55 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:49.774 19:24:55 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:49.774 19:24:55 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.774 19:24:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:49.774 [2024-07-24 19:24:55.289580] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.774 null0 00:32:49.774 [2024-07-24 19:24:55.321629] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:49.774 [2024-07-24 19:24:55.322205] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:49.774 19:24:55 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.774 19:24:55 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:49.774 903254915 00:32:49.774 19:24:55 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:49.774 941583570 00:32:49.774 19:24:55 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1807454 00:32:49.774 19:24:55 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:49.774 19:24:55 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1807454 /var/tmp/bperf.sock 00:32:49.774 19:24:55 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1807454 ']' 00:32:49.774 19:24:55 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:49.774 19:24:55 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:49.774 19:24:55 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:49.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:49.774 19:24:55 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:49.774 19:24:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:49.774 [2024-07-24 19:24:55.420933] Starting SPDK v24.09-pre git sha1 74f92fe69 / DPDK 24.03.0 initialization... 
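The serial numbers printed above (903254915 for key0, 941583570 for key1) come straight from keyctl(1): linux.sh drives the plain keyutils round trip on the session keyring, which the sketch below replays with the key0 values from this run. The bare-serial unlink is what produces the "1 links removed" lines during cleanup:

sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl print "$sn"                      # dump the payload back (linux.sh@27)
keyctl search @s user :spdk-test:key0   # name -> serial, as get_keysn does (linux.sh@16)
keyctl unlink "$sn"                     # detach from the session keyring: "1 links removed"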
00:32:49.774 [2024-07-24 19:24:55.421107] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1807454 ] 00:32:50.032 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.032 [2024-07-24 19:24:55.534843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.032 [2024-07-24 19:24:55.673873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.290 19:24:55 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:50.290 19:24:55 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:50.290 19:24:55 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:50.290 19:24:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:50.855 19:24:56 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:50.855 19:24:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:51.447 19:24:56 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:51.447 19:24:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:51.447 [2024-07-24 19:24:57.098589] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:51.706 nvme0n1 00:32:51.706 19:24:57 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:51.706 19:24:57 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:51.706 19:24:57 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:51.706 19:24:57 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:51.706 19:24:57 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:51.706 19:24:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.964 19:24:57 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:51.964 19:24:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:51.964 19:24:57 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:51.964 19:24:57 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:51.964 19:24:57 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:51.964 19:24:57 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:51.964 19:24:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:52.222 19:24:57 keyring_linux -- keyring/linux.sh@25 -- # sn=903254915 00:32:52.222 19:24:57 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:52.223 19:24:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:32:52.223 19:24:57 keyring_linux -- keyring/linux.sh@26 -- # [[ 903254915 == \9\0\3\2\5\4\9\1\5 ]] 00:32:52.223 19:24:57 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 903254915 00:32:52.223 19:24:57 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:52.223 19:24:57 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:52.223 Running I/O for 1 seconds... 00:32:53.598 00:32:53.598 Latency(us) 00:32:53.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.598 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:53.598 nvme0n1 : 1.02 5349.42 20.90 0.00 0.00 23729.28 9514.86 35340.89 00:32:53.598 =================================================================================================================== 00:32:53.598 Total : 5349.42 20.90 0.00 0.00 23729.28 9514.86 35340.89 00:32:53.598 0 00:32:53.598 19:24:58 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:53.598 19:24:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:53.856 19:24:59 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:53.856 19:24:59 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:53.856 19:24:59 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:53.856 19:24:59 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:53.856 19:24:59 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.856 19:24:59 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:54.114 19:24:59 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:54.115 19:24:59 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:54.115 19:24:59 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:54.115 19:24:59 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:54.115 19:24:59 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:32:54.115 19:24:59 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:54.115 19:24:59 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:54.115 19:24:59 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:54.115 19:24:59 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:54.115 19:24:59 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:54.115 19:24:59 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:54.115 19:24:59 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:54.373 [2024-07-24 19:24:59.904567] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:54.373 [2024-07-24 19:24:59.905003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dafe0 (107): Transport endpoint is not connected 00:32:54.373 [2024-07-24 19:24:59.905988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dafe0 (9): Bad file descriptor 00:32:54.373 [2024-07-24 19:24:59.906985] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:54.373 [2024-07-24 19:24:59.907012] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:54.373 [2024-07-24 19:24:59.907030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:54.373 request: 00:32:54.373 { 00:32:54.373 "name": "nvme0", 00:32:54.373 "trtype": "tcp", 00:32:54.373 "traddr": "127.0.0.1", 00:32:54.373 "adrfam": "ipv4", 00:32:54.373 "trsvcid": "4420", 00:32:54.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:54.373 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:54.373 "prchk_reftag": false, 00:32:54.373 "prchk_guard": false, 00:32:54.373 "hdgst": false, 00:32:54.373 "ddgst": false, 00:32:54.373 "psk": ":spdk-test:key1", 00:32:54.373 "method": "bdev_nvme_attach_controller", 00:32:54.373 "req_id": 1 00:32:54.373 } 00:32:54.373 Got JSON-RPC error response 00:32:54.373 response: 00:32:54.373 { 00:32:54.373 "code": -5, 00:32:54.373 "message": "Input/output error" 00:32:54.373 } 00:32:54.373 19:24:59 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:32:54.373 19:24:59 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:54.373 19:24:59 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:54.373 19:24:59 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:54.373 19:24:59 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:54.373 19:24:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:54.373 19:24:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:54.373 19:24:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:54.373 19:24:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:54.373 19:24:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:54.373 19:24:59 keyring_linux -- keyring/linux.sh@33 -- # sn=903254915 00:32:54.373 19:24:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 903254915 00:32:54.373 1 links removed 00:32:54.373 19:24:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:54.373 19:24:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:54.373 19:24:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:54.373 19:24:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:54.373 19:24:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:54.373 19:24:59 keyring_linux -- keyring/linux.sh@33 -- # sn=941583570 00:32:54.373 
19:24:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 941583570
00:32:54.373 1 links removed
00:32:54.373 19:24:59 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1807454
00:32:54.373 19:24:59 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1807454 ']'
00:32:54.373 19:24:59 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1807454
00:32:54.373 19:24:59 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:32:54.373 19:24:59 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:54.373 19:24:59 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1807454
00:32:54.373 19:24:59 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:32:54.373 19:24:59 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:32:54.373 19:24:59 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1807454'
00:32:54.373 killing process with pid 1807454
00:32:54.373 19:24:59 keyring_linux -- common/autotest_common.sh@969 -- # kill 1807454
00:32:54.373 Received shutdown signal, test time was about 1.000000 seconds
00:32:54.373
00:32:54.373 Latency(us)
00:32:54.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:54.373 ===================================================================================================================
00:32:54.374 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:54.374 19:24:59 keyring_linux -- common/autotest_common.sh@974 -- # wait 1807454
00:32:54.632 19:25:00 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1807318
00:32:54.632 19:25:00 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1807318 ']'
00:32:54.632 19:25:00 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1807318
00:32:54.632 19:25:00 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:32:54.632 19:25:00 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:32:54.632 19:25:00 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1807318
00:32:54.632 19:25:00 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:32:54.632 19:25:00 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:32:54.632 19:25:00 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1807318'
00:32:54.632 killing process with pid 1807318
00:32:54.632 19:25:00 keyring_linux -- common/autotest_common.sh@969 -- # kill 1807318
00:32:54.632 19:25:00 keyring_linux -- common/autotest_common.sh@974 -- # wait 1807318
00:32:55.570
00:32:55.570 real 0m6.548s
00:32:55.570 user 0m12.740s
00:32:55.570 sys 0m1.986s
00:32:55.570 19:25:00 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:55.570 19:25:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:32:55.570 ************************************
00:32:55.570 END TEST keyring_linux
00:32:55.570 ************************************
00:32:55.570 19:25:00 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:32:55.570 19:25:00 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:32:55.570 19:25:00 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:32:55.570 19:25:00 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:32:55.570 19:25:00 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:32:55.570 19:25:00 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:32:55.570 19:25:00 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:32:55.570 19:25:00 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:32:55.570 19:25:00 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:32:55.570 19:25:00 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:32:55.570 19:25:00 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']'
00:32:55.570 19:25:00 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:32:55.570 19:25:00 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:32:55.570 19:25:00 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:32:55.570 19:25:00 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]]
00:32:55.570 19:25:00 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT
00:32:55.570 19:25:00 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup
00:32:55.570 19:25:00 -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:55.570 19:25:00 -- common/autotest_common.sh@10 -- # set +x
00:32:55.570 19:25:00 -- spdk/autotest.sh@387 -- # autotest_cleanup
00:32:55.570 19:25:00 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:32:55.570 19:25:00 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:32:55.570 19:25:00 -- common/autotest_common.sh@10 -- # set +x
00:32:57.477 INFO: APP EXITING
00:32:57.477 INFO: killing all VMs
00:32:57.477 INFO: killing vhost app
00:32:57.477 INFO: EXIT DONE
00:32:58.853 0000:82:00.0 (8086 0a54): Already using the nvme driver
00:32:58.853 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:32:58.853 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:32:58.853 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:32:59.112 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:32:59.112 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:32:59.112 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:32:59.112 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:32:59.112 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:32:59.112 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:32:59.112 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:32:59.112 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:32:59.112 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:32:59.112 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:32:59.112 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:32:59.112 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:32:59.112 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:33:01.017 Cleaning
00:33:01.017 Removing: /var/run/dpdk/spdk0/config
00:33:01.017 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:33:01.017 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:33:01.017 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:33:01.017 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:33:01.017 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:33:01.017 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:33:01.017 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:33:01.017 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:33:01.017 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:33:01.017 Removing: /var/run/dpdk/spdk0/hugepage_info
00:33:01.017 Removing: /var/run/dpdk/spdk1/config
00:33:01.017 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:33:01.017 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:33:01.017 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:33:01.017 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:33:01.017 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:33:01.017 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:33:01.017 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:33:01.017 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:33:01.017 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:33:01.017 Removing: /var/run/dpdk/spdk1/hugepage_info
00:33:01.017 Removing: /var/run/dpdk/spdk1/mp_socket
00:33:01.017 Removing: /var/run/dpdk/spdk2/config
00:33:01.017 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:33:01.017 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:33:01.017 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:33:01.017 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:33:01.017 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:33:01.017 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:33:01.017 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:33:01.017 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:33:01.017 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:33:01.017 Removing: /var/run/dpdk/spdk2/hugepage_info
00:33:01.017 Removing: /var/run/dpdk/spdk3/config
00:33:01.017 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:33:01.017 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:33:01.017 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:33:01.017 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:33:01.017 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:33:01.017 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:33:01.017 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:33:01.017 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:33:01.017 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:33:01.017 Removing: /var/run/dpdk/spdk3/hugepage_info
00:33:01.017 Removing: /var/run/dpdk/spdk4/config
00:33:01.017 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:33:01.017 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:33:01.017 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:33:01.017 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:33:01.017 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:33:01.017 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:33:01.017 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:33:01.017 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:33:01.017 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:33:01.017 Removing: /var/run/dpdk/spdk4/hugepage_info
00:33:01.017 Removing: /dev/shm/bdev_svc_trace.1
00:33:01.017 Removing: /dev/shm/nvmf_trace.0
00:33:01.017 Removing: /dev/shm/spdk_tgt_trace.pid1522071
00:33:01.017 Removing: /var/run/dpdk/spdk0
00:33:01.017 Removing: /var/run/dpdk/spdk1
00:33:01.017 Removing: /var/run/dpdk/spdk2
00:33:01.017 Removing: /var/run/dpdk/spdk3
00:33:01.017 Removing: /var/run/dpdk/spdk4
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1520261
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1521126
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1522071
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1522547
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1523208
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1523478
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1524200
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1524329
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1524584
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1526662
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1527900
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1528272
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1528557
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1528798
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1529114
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1529276
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1529449
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1529740
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1530054
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1532944
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1533238
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1533536
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1533664
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1534228
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1534364
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1534924
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1534940
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1535233
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1535459
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1535667
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1535793
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1536301
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1536504
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1536782
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1539125
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1542119
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1549343
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1549806
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1552474
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1552753
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1555660
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1560432
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1562996
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1570114
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1575615
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1576935
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1577603
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1588703
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1591608
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1619483
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1622799
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1627144
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1632018
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1632024
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1632643
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1633207
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1633873
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1634268
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1634272
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1634531
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1634666
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1634679
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1635330
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1635867
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1636523
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1636924
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1637052
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1637194
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1638459
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1639316
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1644787
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1678620
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1681827
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1683001
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1684319
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1684460
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1684614
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1684756
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1685313
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1686628
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1687575
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1688054
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1689791
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1690353
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1690917
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1693699
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1699832
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1702782
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1707182
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1708138
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1709351
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1712207
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1714707
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1719221
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1719231
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1722272
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1722408
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1722538
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1722875
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1722930
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1725848
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1726301
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1729245
00:33:01.018 Removing: /var/run/dpdk/spdk_pid1731222
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1734936
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1738617
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1746518
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1751020
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1751086
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1765954
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1766483
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1766893
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1767553
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1768304
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1768920
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1769949
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1770375
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1773134
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1773277
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1777202
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1777275
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1778990
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1784174
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1784179
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1787355
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1788754
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1790155
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1790895
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1792302
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1793181
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1798826
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1799206
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1799757
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1801667
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1801976
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1802367
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1804829
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1804978
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1806836
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1807318
00:33:01.278 Removing: /var/run/dpdk/spdk_pid1807454
00:33:01.278 Clean
00:33:01.278 19:25:06 -- common/autotest_common.sh@1451 -- # return 0
00:33:01.278 19:25:06 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:33:01.278 19:25:06 -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:01.278 19:25:06 -- common/autotest_common.sh@10 -- # set +x
00:33:01.278 19:25:06 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:33:01.278 19:25:06 -- common/autotest_common.sh@730 -- # xtrace_disable
00:33:01.278 19:25:06 -- common/autotest_common.sh@10 -- # set +x
00:33:01.278 19:25:06 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:01.278 19:25:06 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:33:01.278 19:25:06 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:33:01.278 19:25:06 -- spdk/autotest.sh@395 -- # hash lcov
00:33:01.278 19:25:06 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:33:01.278 19:25:06 -- spdk/autotest.sh@397 -- # hostname
00:33:01.278 19:25:06 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:33:01.537 geninfo: WARNING: invalid characters removed from testname!
00:33:57.793 19:25:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:57.793 19:26:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:00.330 19:26:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:04.529 19:26:09 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:07.827 19:26:13 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:12.021 19:26:16 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:34:15.325 19:26:20 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:15.325 19:26:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:15.325 19:26:20 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:34:15.325 19:26:20 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:15.325 19:26:20 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:15.325 19:26:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:15.325 19:26:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:15.325 19:26:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:15.325 19:26:20 -- paths/export.sh@5 -- $ export PATH
00:34:15.325 19:26:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:15.325 19:26:20 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:34:15.325 19:26:20 -- common/autobuild_common.sh@447 -- $ date +%s
00:34:15.325 19:26:20 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721841980.XXXXXX
00:34:15.325 19:26:20 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721841980.zuYc7D
00:34:15.325 19:26:20 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:34:15.325 19:26:20 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:34:15.325 19:26:20 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:34:15.325 19:26:20 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:34:15.325 19:26:20 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:34:15.325 19:26:20 -- common/autobuild_common.sh@463 -- $ get_config_params
00:34:15.325 19:26:20 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:34:15.325 19:26:20 -- common/autotest_common.sh@10 -- $ set +x
00:34:15.325 19:26:20 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:34:15.325 19:26:20 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:34:15.325 19:26:20 -- pm/common@17 -- $ local monitor
00:34:15.325 19:26:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:15.325 19:26:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:15.325 19:26:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:15.325 19:26:20 -- pm/common@21 -- $ date +%s
00:34:15.325 19:26:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:15.325 19:26:20 -- pm/common@21 -- $ date +%s
00:34:15.325 19:26:20 -- pm/common@25 -- $ sleep 1
00:34:15.325 19:26:20 -- pm/common@21 -- $ date +%s
00:34:15.325 19:26:20 -- pm/common@21 -- $ date +%s
00:34:15.325 19:26:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841980
00:34:15.325 19:26:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841980
00:34:15.325 19:26:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841980
00:34:15.325 19:26:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721841980
00:34:15.325 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841980_collect-vmstat.pm.log
00:34:15.325 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841980_collect-cpu-load.pm.log
00:34:15.325 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841980_collect-cpu-temp.pm.log
00:34:15.325 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721841980_collect-bmc-pm.bmc.pm.log
00:34:16.263 19:26:21 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:34:16.263 19:26:21 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:34:16.263 19:26:21 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:16.263 19:26:21 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:34:16.263 19:26:21 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:34:16.263 19:26:21 -- spdk/autopackage.sh@19 -- $ timing_finish
00:34:16.263 19:26:21 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:16.263 19:26:21 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:34:16.263 19:26:21 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:16.263 19:26:21 -- spdk/autopackage.sh@20 -- $ exit 0
00:34:16.263 19:26:21 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:34:16.263 19:26:21 -- pm/common@29 -- $ signal_monitor_resources TERM
00:34:16.263 19:26:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:34:16.263 19:26:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:16.263 19:26:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:34:16.263 19:26:21 -- pm/common@44 -- $ pid=1818048
00:34:16.263 19:26:21 -- pm/common@50 -- $ kill -TERM 1818048
00:34:16.263 19:26:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:16.263 19:26:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:34:16.263 19:26:21 -- pm/common@44 -- $ pid=1818050
00:34:16.263 19:26:21 -- pm/common@50 -- $ kill -TERM 1818050
00:34:16.263 19:26:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:16.263 19:26:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:34:16.263 19:26:21 -- pm/common@44 -- $ pid=1818052
00:34:16.263 19:26:21 -- pm/common@50 -- $ kill -TERM 1818052
00:34:16.263 19:26:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:16.263 19:26:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:34:16.263 19:26:21 -- pm/common@44 -- $ pid=1818088
00:34:16.263 19:26:21 -- pm/common@50 -- $ sudo -E kill -TERM 1818088
00:34:16.263 + [[ -n 1431525 ]]
00:34:16.263 + sudo kill 1431525
00:34:16.273 [Pipeline] }
00:34:16.292 [Pipeline] // stage
00:34:16.298 [Pipeline] }
00:34:16.316 [Pipeline] // timeout
00:34:16.324 [Pipeline] }
00:34:16.342 [Pipeline] // catchError
00:34:16.348 [Pipeline] }
00:34:16.366 [Pipeline] // wrap
00:34:16.373 [Pipeline] }
00:34:16.389 [Pipeline] // catchError
00:34:16.400 [Pipeline] stage
00:34:16.402 [Pipeline] { (Epilogue)
00:34:16.418 [Pipeline] catchError
00:34:16.420 [Pipeline] {
00:34:16.436 [Pipeline] echo
00:34:16.439 Cleanup processes
00:34:16.446 [Pipeline] sh
00:34:16.733 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:16.733 1818207 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:34:16.733 1818315 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:16.748 [Pipeline] sh
00:34:17.092 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:17.092 ++ grep -v 'sudo pgrep'
00:34:17.092 ++ awk '{print $1}'
00:34:17.092 + sudo kill -9 1818207
00:34:17.105 [Pipeline] sh
00:34:17.391 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:35.492 [Pipeline] sh
00:34:35.775 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:36.033 Artifacts sizes are good
00:34:36.047 [Pipeline] archiveArtifacts
00:34:36.054 Archiving artifacts
00:34:36.299 [Pipeline] sh
00:34:36.583 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:34:36.598 [Pipeline] cleanWs
00:34:36.608 [WS-CLEANUP] Deleting project workspace...
00:34:36.608 [WS-CLEANUP] Deferred wipeout is used...
00:34:36.614 [WS-CLEANUP] done
00:34:36.616 [Pipeline] }
00:34:36.635 [Pipeline] // catchError
00:34:36.646 [Pipeline] sh
00:34:36.927 + logger -p user.info -t JENKINS-CI
00:34:36.936 [Pipeline] }
00:34:36.951 [Pipeline] // stage
00:34:36.957 [Pipeline] }
00:34:36.974 [Pipeline] // node
00:34:36.979 [Pipeline] End of Pipeline
00:34:37.008 Finished: SUCCESS